
Static or Dynamic Segments? A Complete Guide for Customer Insights – Journeys

Dynamics 365 Customer Insights – Journeys is one of Microsoft's most widely adopted modules, providing a flexible platform that enables organizations to engage their audience at every stage of the customer journey.

It empowers businesses to create personalized, timely, and meaningful interactions based on customer behavior and preferences. As we all know, understanding your customers is not a one-time activity; it is a continuous journey that evolves with every interaction. Targeting the right audience to perform any marketing activity is the key to success.

When running marketing activities in Customer Insights – Journeys, the most important question to answer is:

“Who exactly should receive this message or journey?”

This is where Segments play a key role.

What Are Segments in Customer Insights – Journeys?

Segments in Customer Insights – Journeys allow you to group Contacts or Leads based on a defined set of attributes or behaviors. These segments act as the foundation for targeting audiences in real-time journeys, emails, and other marketing engagements.

Using segments, you can:

  • Filter Contacts or Leads using specific conditions
  • Target audiences based on demographic or behavioral data
  • Ensure messages reach the most relevant audience

Segments help transform generic marketing campaigns into highly targeted and strategic journeys.

Types of Segments in Customer Insights – Journeys

There are two types of segments available in Customer Insights – Journeys:

  1. Automatic Refresh (Dynamic Segment)
  2. Static Snapshot (Static Segment)

In earlier versions of the Marketing app, users could choose the segment type directly when clicking the New Segment button. In the current Real-time Journeys experience, this behavior has changed.

Now, you must:

  1. Create the segment first
  2. Define the segment type later from the Settings panel

Where to Find Segments in Customer Insights – Journeys

To access segments:

  • Go to Customer Insights – Journeys
  • Navigate to Real-time journeys
  • Select Audience → Segments

At the segment record level, you will notice a Type field that indicates whether the segment is configured as a Static Snapshot or an Automatic Refresh.

Dynamics 365 Customer Insights Journeys Segments

How to Create a Segment in the New Experience

When you click New Segment, you are no longer prompted to choose between Static or Dynamic upfront. Instead, the segment is created first, and its behavior is defined later.

Dynamics 365 Customer Insights Journeys Segments

During creation, you have two options:

  1. Using Query Assist (Copilot)

Query Assist allows Copilot AI to help generate segment logic.

  • Start typing in the Query Assist box
  • Select a predefined prompt such as “Contacts who opened an email”
  • Click Create

Dynamics 365 Customer Insights Journeys Segments

You can either:

  • Click Use to apply the suggested query
  • Or click Create manually to skip AI assistance

Dynamics 365 Customer Insights Journeys Segments

Once selected, Copilot helps build the initial query structure based on the chosen prompt.

Dynamics 365 Customer Insights Journeys Segments

You can find more details on building segments with Query Assist in the official Microsoft documentation.

  2. Creating a Segment Manually

If you prefer full control:

  • Click Create manually, or
  • Leave the Query Assist box empty and click Create

This opens the Segment Builder, where you can define your logic from scratch.

Dynamics 365 Customer Insights Journeys Segments

Building Segment Logic Using the Segment Builder

Inside the Segment Builder, you can define segment criteria using:

  • Attribute-based conditions (e.g., Industry, Country, Job Title)
  • Behavioral conditions (e.g., email opens, form submissions)
  • Include or Exclude specific Leads or Contacts

You can also explicitly include or exclude specific records. An explicitly included or excluded record is always added to or removed from the segment member list, even if it does not meet the defined conditions.

When you open the Segment Builder, you can start creating a new group by selecting the desired option (either Attribute or Behavioral). If required, you can also Include or Exclude a particular audience.

Refer to the screenshots below:

Dynamics 365 Customer Insights Journeys Segments

Dynamics 365 Customer Insights Journeys Segments

Example Use Case: Targeting Manufacturing Leads from India

Let’s consider a practical example.

Use case:
Target Leads from the Manufacturing sector located in India.

Segment conditions:

  • Industry equals Manufacturing Services
  • Country/Region equals India

You can create an Attribute group and define these conditions accordingly. Once the logic is complete, save the segment.

Before activating it, you can preview the audience size.

Estimating Segment Size Before Activation

Before marking a segment as Ready to use, you can:

  • Click Estimate to preview the expected number of segment members
  • Review the estimated member count to validate your logic

This helps ensure your targeting criteria are accurate before using the segment in a journey.

Dynamics 365 Customer Insights Journeys Segments

Dynamics 365 Customer Insights Journeys Segments

The member count can be previewed from here:

Dynamics 365 Customer Insights Journeys Segments

Segment Settings: Static Snapshot vs Automatic Refresh

The Settings panel is where the segment type is defined.

By default, all newly created segments are set to Automatic Refresh.

Dynamics 365 Customer Insights Journeys Segments

Let us consider one of the Dynamic Segment graphs. As you can see in the graph below, the segment size has increased over time as new records met the segment criteria.

Dynamics 365 Customer Insights Journeys Segments

If your use case requires a static segment, you must explicitly select the "Static Snapshot" option, as shown below:

Dynamics 365 Customer Insights Journeys Segments

With Static Snapshot, the segment membership is not updated dynamically; it is captured once and intended for one-time use.

Let us consider one of the Static Segment graphs. As you can see in the graph below, the segment size has remained constant over time, since the snapshot was a one-time activity.

Dynamics 365 Customer Insights Journeys Segments

Key Differences: Static Snapshot vs Automatic Refresh

| Feature | Static Snapshot | Automatic Refresh |
| --- | --- | --- |
| Membership updates | No | Yes |
| Audience type | Fixed | Dynamic |
| Best suited for | One-time campaigns | Ongoing journeys |
| Data refresh | One-time | Continuous |
| Real-time targeting | Not supported | Supported |

 

FAQs

What Is Automatic Refresh (Dynamic Segment)?

In Automatic Refresh, the segment membership updates dynamically.

This means:

  • New Contacts or Leads that meet the criteria are automatically added
  • Records that no longer meet the criteria are removed
  • The segment size changes continuously over time

Dynamic segments are ideal for:

  • Ongoing marketing journeys
  • Real-time audience targeting
  • Long-running nurture campaigns

You can observe these changes visually through segment growth graphs, where the member count increases or decreases over time.

What Is Static Snapshot (Static Segment)?

In Static Snapshot, the segment captures audience members at a specific point in time.

This means:

  • Segment membership does not update after activation
  • The audience remains fixed
  • It is typically used for one-time activities

Static Snapshot segments are best suited for:

  • One-time email campaigns
  • Event invitations
  • Compliance or audit-based targeting

Segment graphs for Static Snapshot segments show a flat line, indicating no change in membership over time.

When Should You Use Each Segment Type?

  • Use Automatic Refresh when your audience changes frequently and journeys run continuously.
  • Use Static Snapshot when you need a fixed audience for a specific moment or campaign.

Choosing the right segment type ensures accurate targeting and optimal journey performance.

Conclusion

Segments play a critical role in successfully targeting audiences within Customer Insights – Journeys. Whether you are grouping customers based on demographic attributes or behavioral interactions, segments allow you to make your marketing more strategic and data-driven.

Automatic Refresh segments are ideal for real-time, evolving journeys, while Static Snapshot segments are best suited for one-time or fixed audience scenarios. Understanding the difference between these two options helps you design more effective journeys and deliver the right message to the right audience at the right time.



Automating Business PDFs Using Azure Document Intelligence and Power Automate

In today’s data-driven enterprises, critical business information often arrives in the form of PDFs—bank statements, invoices, policy documents, reports, and contracts. Although these files contain valuable information, turning them into structured, reusable data or finalized business documents often requires significant manual effort and is highly error-prone.
By leveraging Azure Document Intelligence (for PDF data extraction), Azure Functions (for custom business logic), and Power Automate (for workflow orchestration) together, businesses can create a seamless automation pipeline that interprets PDF content, transforms extracted information through business rules, and produces finalized documents automatically, eliminating repetitive manual work and improving overall efficiency.
In this blog, we will explore how these Azure services work together to automate document creation from business PDFs in a scalable and reliable way.

Use Case: Automatically Converting Bank Statement PDFs into CSV Files

Let’s consider a potential use case.
The finance team receives bank statements as PDF attachments in a shared mailbox on a regular basis. These statements contain transaction details in tabular format, but extracting the data manually into Excel or CSV files is time-consuming and often leads to formatting issues such as broken rows, missing dates, and incorrect debit or credit values.
The goal is to automatically process these emailed PDF bank statements as soon as they arrive, extract the transaction data accurately, and generate a clean, structured CSV file that can be directly used for reconciliation and financial reporting.
By using Power Automate to monitor incoming emails, Azure Document Intelligence to analyze the PDFs, and Azure Functions to apply custom data-cleaning logic, the entire process can be automated, eliminating manual effort and ensuring consistent, reliable output.
Let’s walk through the steps below to achieve this requirement.

Prerequisites:

Before we get started, we need to have the following things ready:
  • Azure subscription
  • Access to Power Automate to create email-triggered flows
  • Visual Studio 2022

Step 1:

Navigate to the Azure portal (https://portal.azure.com), search for the Azure Document Intelligence service, and click Create to provision a new resource.

Azure Document Intelligence

Step 2:

Choose your Azure subscription (here, Azure subscription 1), create a new resource group, enter an appropriate name for the Document Intelligence instance, select the desired pricing tier, and click Review + Create to proceed.

Azure Document Intelligence

Step 3:

After reviewing the configuration, click Create and wait for the deployment to complete. Once the deployment is finished, select Go to resource.

Azure Document Intelligence

Step 4:

Navigate to the newly created Document Intelligence resource, and make a note of the endpoint and any one of the keys listed at the bottom of the page.

Azure Document Intelligence

Step 5:

Create a new Azure Function in Visual Studio 2022 using an HTTP trigger with the .NET isolated worker model, and add the following code.

[Function("PdfToCsvExtractor")]
public async Task Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req)
{
_logger.LogInformation("Form Recognizer extraction triggered.");

// Accept either multipart/form-data (file field) OR raw application/pdf bytes.
Stream pdfStream = null;

try
{
// If content-type is multipart/form-data => read form and file
if (req.HasFormContentType)
{
var form = await req.ReadFormAsync();
var file = form.Files?.FirstOrDefault();
if (file == null || file.Length == 0)
return new BadRequestObjectResult("No file was uploaded in the multipart form-data.");

pdfStream = new MemoryStream();
await file.CopyToAsync(pdfStream);
pdfStream.Position = 0;
}
else
{
// Otherwise expect raw PDF bytes with Content-Type: application/pdf
if (!req.Body.CanRead)
return new BadRequestObjectResult("Request body empty.");

pdfStream = new MemoryStream();
await req.Body.CopyToAsync(pdfStream);
pdfStream.Position = 0;
}

string endpoint = Environment.GetEnvironmentVariable("FORM_RECOGNIZER_ENDPOINT");
string key = Environment.GetEnvironmentVariable("FORM_RECOGNIZER_KEY");
if (string.IsNullOrEmpty(endpoint) || string.IsNullOrEmpty(key))
return new BadRequestObjectResult("Missing Form Recognizer environment variables.");

var credential = new AzureKeyCredential(key);
var client = new DocumentAnalysisClient(new Uri(endpoint), credential);

var operation = await client.AnalyzeDocumentAsync(
WaitUntil.Completed,
"prebuilt-document",
pdfStream
);
var result = operation.Value;
_logger.LogInformation("pdfstream: " + pdfStream);

_logger.LogInformation("Result: "+ result.Tables.ToList());

// returns raw JSON table data
var filteredTables = result.Tables.ToList());
if (filteredTables.Count == 0)
return new BadRequestObjectResult("No transaction table found.");

string csvOutput = BuildCsvFromTables(filteredTables);

var csvBytes = Encoding.UTF8.GetBytes(csvOutput);

var emailResult = await SendEmailWithCsvAsync(
_logger,
csvBytes,
"ExtractedTable.csv");

return new OkObjectResult("Table data extracted and exported to csv file");
}
catch (Exception ex)
{
_logger.LogError(ex, ex.Message);
return new StatusCodeResult(500);
}
finally
{
pdfStream?.Dispose();
}
}

// Method to build a CSV file from the extracted tables
private string BuildCsvFromTables(IReadOnlyList<DocumentTable> tables)
{
var csvBuilder = new StringBuilder();
// Write CSV header
csvBuilder.AppendLine("Date,Transaction,Debit,Credit,Balance");
foreach (var table in tables)
{
// Group cells by row index
var rows = table.Cells
.GroupBy(c => c.RowIndex)
.OrderBy(g => g.Key);
foreach (var row in rows)
{
// Skip header row (row index 0)
if (row.Key == 0)
continue;
var rowValues = new string[5];
foreach (var cell in row)
{
if (cell.ColumnIndex < rowValues.Length)
{
// Clean commas and line breaks for CSV safety
rowValues[cell.ColumnIndex] =
cell.Content.Replace(",", " ").Replace("\n", " ").Trim();
}
}
csvBuilder.AppendLine(string.Join(",", rowValues));
}
}
return csvBuilder.ToString();
}

// Method to send the CSV file as an email attachment via Microsoft Graph
public async Task<IActionResult> SendEmailWithCsvAsync(
ILogger log,
byte[] csvBytes,
string csvFileName)
{
log.LogInformation("Inside AzureSendEmailOnSuccess");

string clientId = Environment.GetEnvironmentVariable("InogicFunctionApp_client_id");
string clientSecret =Environment.GetEnvironmentVariable("InogicFunctionApp_client_secret");
string tenantId = Environment.GetEnvironmentVariable("Tenant_ID");
string receiverEmail = Environment.GetEnvironmentVariable("ReceiverEmail");
string senderEmail = Environment.GetEnvironmentVariable("SenderEmail");

var missing = new List<string>();

if (string.IsNullOrEmpty(clientId)) missing.Add(nameof(clientId));
if (string.IsNullOrEmpty(clientSecret)) missing.Add(nameof(clientSecret));
if (string.IsNullOrEmpty(tenantId)) missing.Add(nameof(tenantId));
if (string.IsNullOrEmpty(receiverEmail)) missing.Add(nameof(receiverEmail));
if (string.IsNullOrEmpty(senderEmail)) missing.Add(nameof(senderEmail));

if (missing.Count > 0)
{
return new BadRequestObjectResult(
new { message = "Missing: " + string.Join(", ", missing) }
);
}

var app = ConfidentialClientApplicationBuilder
.Create(clientId)
.WithClientSecret(clientSecret)
.WithAuthority($"https://login.microsoftonline.com/{tenantId}")
.Build();

var result = await app.AcquireTokenForClient(
new[] { "https://graph.microsoft.com/.default" })
.ExecuteAsync();

string token = result.AccessToken;

// Line breaks are HTML tags because the message body contentType is HTML
string emailBody =
"Hello,<br/><br/>"
+ "Please find attached the extracted CSV.<br/><br/>"
+ "Regards,<br/>Inogic Developer.";

var attachment = new Dictionary<string, object>
{
{ "@odata.type", "#microsoft.graph.fileAttachment" },
{ "name", csvFileName },
{ "contentType", "text/csv" },
{ "contentBytes", Convert.ToBase64String(csvBytes) }
};

var emailPayload = new Dictionary<string, object>
{
{
"message",
new Dictionary<string, object>
{
{ "subject", "Extracted PDF Table CSV" },
{
"body",
new Dictionary<string, object>
{
{ "contentType", "HTML" },
{ "content", emailBody }
}
},
{
"toRecipients",
new[]
{
new Dictionary<string, object>
{
{
"emailAddress",
new Dictionary<string, object>
{
{ "address", receiverEmail }
}
}
}
}
},
{ "attachments", new[] { attachment } }
}
},
{ "saveToSentItems", "false" }
};

string json = JsonSerializer.Serialize(emailPayload);

using var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Authorization =
new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", token);

var httpContent = new StringContent(json, Encoding.UTF8, "application/json");

var response = await httpClient.PostAsync(
$"https://graph.microsoft.com/v1.0/users/{senderEmail}/sendMail",
httpContent
);

if (response.IsSuccessStatusCode)
return new OkObjectResult("CSV Email sent successfully.");

string errorBody = await response.Content.ReadAsStringAsync();
log.LogError($"Graph Error: {response.StatusCode} - {errorBody}");
return new StatusCodeResult(500);
}
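
The function reads its configuration from environment variables. For local testing, a minimal local.settings.json sketch along these lines can supply them; the key names match the ones referenced in the code above, and every value is a placeholder:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "FORM_RECOGNIZER_ENDPOINT": "https://<your-resource>.cognitiveservices.azure.com/",
    "FORM_RECOGNIZER_KEY": "<document-intelligence-key>",
    "InogicFunctionApp_client_id": "<app-registration-client-id>",
    "InogicFunctionApp_client_secret": "<app-registration-client-secret>",
    "Tenant_ID": "<tenant-id>",
    "ReceiverEmail": "<recipient-address>",
    "SenderEmail": "<sender-mailbox-address>"
  }
}

After publishing, configure the same keys as application settings on the Function App so the deployed function can read them.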

Step 6:

Build the Azure Function project in Visual Studio and publish it to the Azure portal.

Step 7:

Open https://make.powerautomate.com and create a new cloud flow using the When a new email arrives in a shared mailbox (V2) trigger. Enter the shared mailbox email address in Original Mailbox Address, and set both Only with Attachments and Include Attachments to Yes.

Azure Document Intelligence

Step 8:

Add a Condition action to verify that the attachment type is PDF.

Azure Document Intelligence
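
For example, the condition can check the attachment's file name extension with an expression like the one below (a sketch; the name property is an assumption about the attachment items being looped over):

endsWith(toLower(item()?['name']), '.pdf')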

Step 9:

If the condition is met, in the Yes branch add the Get Attachment (V2) action. Configure Message Id with the value from the trigger, Attachment Id with the value from the current loop item, and provide the email address of the shared mailbox.

Azure Document Intelligence

Step 10:

Add a Compose action to convert the attachment content bytes to Base64 using the following expression:
base64(outputs('Get_Attachment_(V2)')?['body/contentBytes'])

Step 11:

Add another Compose action to convert the Base64 output from the previous step into a string using:
base64ToString(outputs('Compose'))

Step 12:

Add an HTTP (Premium) action, set the method to POST, provide the URL of the published Azure Function, and configure the request body as shown below:

{
"$content-type": "application/pdf",
"$content": "@{outputs('Compose_2')}"
}

Azure Document Intelligence

To test the setup, send an email to the shared mailbox with the sample PDF attached.
Note: For demonstration purposes, a simplified one-page bank statement PDF is used. Real-world bank statements may contain multi-page tables, wrapped rows, and inconsistent layouts, which are handled through additional parsing logic.

Input PDF file:

Azure Document Intelligence

Output CSV file:

Azure Document Intelligence

Conclusion:

This blog demonstrated how an email-driven automation pipeline can simplify the processing of business PDFs by converting them into structured, usable data.
By combining Power Automate for orchestration, Azure Functions for custom processing, and Azure Document Intelligence for AI-based document analysis, organizations can build scalable, reliable, and low-maintenance document automation solutions that eliminate manual effort and reduce errors.

Frequently Asked Questions:

1. What is Azure Document Intelligence used for?
Azure Document Intelligence is used to extract structured data from unstructured documents such as PDFs, images, invoices, receipts, contracts, and bank statements using AI models.

2. How does Azure Document Intelligence extract data from PDF files?
It analyzes PDF content using prebuilt or custom AI models to identify text, tables, key-value pairs, and document structure, and returns the extracted data in a structured JSON format.

3. Can Power Automate process PDF attachments automatically?
Yes. Power Automate can automatically detect incoming PDF attachments from email, SharePoint, or OneDrive and trigger workflows to process them using Azure services.

4. How do Azure Functions integrate with Power Automate?
Power Automate can call Azure Functions via HTTP actions, allowing custom business logic, data transformation, and validation to run as part of an automated workflow.

5. Is Azure Document Intelligence suitable for bank statements and invoices?
Yes. Azure Document Intelligence can accurately extract tables, transaction data, and key fields from bank statements, invoices, and other financial documents.


Building Standalone Apps with Power Apps Code Apps: Using Dataverse and Office 365 Users Connectors (Part 1)

Power Apps

In the Dynamics 365 and Power Apps ecosystem, we have several options for building applications, each suited to a specific type of requirement. Model-driven Apps work well when we need a structured UI with standard components, while Canvas Apps let us create custom, mobile-friendly interfaces with a low-code approach. Recently, Microsoft introduced another application type called Code Apps, which offers a completely different way to build applications using a pro-code approach.

With the introduction of Power Apps Code Apps, things have changed. Code Apps let us build standalone single-page applications using modern web frameworks. These are independent applications that cannot be integrated with Canvas Apps or Model-driven Apps.

The best part is that we get direct access to more than 1,500 standard and premium connectors through the Power Apps SDK. We do not have to write any authentication code, no OAuth flows, no custom APIs, no middleware. We just have to connect and use.

In this article, we'll walk you through creating a Code App from scratch. We'll build a Personal Dashboard, a simple application that pulls assigned cases and leads from Dataverse and shows the current logged-in user's details using the Office 365 Users and Dataverse connectors.

What Makes Code Apps Different?

We can build a UI of our own choice and connect to a wide range of data sources using more than 1,500 standard and premium connectors provided by the Power Platform. All connections are secure because the Power Apps SDK handles authentication, and each connector enforces user-level permissions. This means the app can only access data that the signed-in user is allowed to see, so there’s no need to write custom authentication code.

Code Apps provide a balanced approach with several key advantages:

  • A standalone application that runs directly within Power Platform
  • Full development with modern web frameworks such as React, Vue, or Angular, with support for your preferred libraries
  • Direct access to connectors through the Power Apps SDK without custom authentication code
  • Streamlined deployment through a single command to your environment

The connector integration is particularly valuable. Whether the need is to query Dataverse, access current user profile details, or use other services, the connector can be called directly. There’s no need to configure service principals, manage app registrations, or implement token management. The integration works seamlessly within the platform.

Prerequisites

Before getting started, we have to make sure the following prerequisites are in place:

  • Power Apps Premium license and an environment with Code Apps enabled
  • Visual Studio Code installed
  • Node.js LTS version
  • Power Platform Tools for VS Code extension

Step 1: Setting Up the Code App

Let’s create the app. Open VS Code, launch a PowerShell terminal, and run the following command:

npm create vite@latest PersonalDashboard -- --template react-ts

For this application, we are using React as the framework and TypeScript as the variant. After that, navigate to the project folder and install the dependencies:

npm install

Install the node type definitions:

npm i --save-dev @types/node

After executing these commands, the project structure will appear as shown in the image below.

PowerAppsCode

According to the official Microsoft documentation, the Power Apps SDK currently requires the port to be 3000 in the default configuration. To configure this, open vite.config.ts and replace the content with the following code:

import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
import * as path from 'path'

// https://vite.dev/config/
export default defineConfig({
  base: "./",
  server: {
    host: "::",
    port: 3000,
  },
  plugins: [react()],
  resolve: {
    alias: {
      "@": path.resolve(__dirname, "./src"),
    },
  },
});

Note for Mac users: It may be necessary to modify the package.json scripts section.

Change from:

"scripts":  {

"dev": "start vite && start pac code run",

"build": "tsc -b && vite build",

"lint": "eslint .",

"preview": "vite preview"

}

to this

"scripts": {
"dev": "vite && pac code run",
"build": "tsc -b && vite build",
"lint": "eslint .",
"preview": "vite preview"
}

Save the file and run the Code App locally by executing:

npm run dev

Browse to http://localhost:3000. If the application loads successfully, press Ctrl+C to stop the server.

Step 2: Initialize the Code App

First authenticate to Power Platform:

pac auth create

After that, sign in with your credentials and select the environment:

pac env select --environment <environment-url>

Initialize the Code App:

pac code init --displayName "Personal Dashboard"

This will create a power.config.json file in the project as shown in the image below.

PowerAppsCode

Now install the Power Apps SDK. This package provides APIs that allow the application to interact directly with Power Platform services and includes built-in logic to manage connections automatically as they are added or removed.

npm install --save-dev @microsoft/power-apps

Update package.json to run both Vite and the Power Apps SDK server:

"scripts": {
"dev": "start pac code run && vite",
"build": "tsc -b && vite build",
"lint": "eslint .",
"preview": "vite preview"
}

Step 3: Configure Power Provider

 

Create PowerProvider.tsx under src and add the Power SDK context provider code given below.

 

import { initialize } from "@microsoft/power-apps/app";
import { useEffect, type ReactNode } from "react";

interface PowerProviderProps {
  children: ReactNode;
}

export default function PowerProvider({ children }: PowerProviderProps) {
  useEffect(() => {
    const initApp = async () => {
      try {
        await initialize();
        console.log('Power Platform SDK initialized successfully');
      } catch (error) {
        console.error('Failed to initialize Power Platform SDK:', error);
      }
    };
    initApp();
  }, []);

  return <>{children}</>;
}

Update the main.tsx and add this line in the imports section:

import PowerProvider from './PowerProvider.tsx'

and change this code snippet

<StrictMode>
<App />
</StrictMode>,

to this

<StrictMode>
  <PowerProvider>
    <App />
  </PowerProvider>
</StrictMode>,

Run the app by executing:

npm run dev

Open the URL provided by the Power Apps SDK server in the same browser profile used for your Power Platform tenant.

Step 4: Adding Dataverse Connector

Now comes the part where we will add the data source to our application. In this step, we’ll use the Dataverse connector to fetch assigned cases and leads for the logged-in user.

First, we need to create a connection:

1. Go to Power Apps and open Connections.

2. Click New Connection and select Dataverse.

Follow the instructions to create the connection, as shown in the image below.

PowerAppsCode

Once the connection is ready, we have to open the terminal. For Dataverse, we have to add the tables required for the application. For this example, we’ll add the Leads and Incident (Cases) tables using the following commands:

pac code add-data-source -a dataverse -t lead

pac code add-data-source -a dataverse -t incident

PowerAppsCode

After running these commands, we can see that some files and folders are added to the project. Inside the generated folder, there are services and models folders. These contain the files for Leads, Incidents, and other tables, which can be used in the code. For example:

import { AccountsService } from './generated/services/AccountsService';

import type { Accounts } from './generated/models/AccountsModel';

CRUD operations can be performed on Dataverse using the app. Before accessing any data, we have to initialize the Power Apps SDK to avoid errors. An async function or state check can ensure the SDK is ready before making API calls. For example:

useEffect(() => {
  // Initialize the Power Apps SDK asynchronously to avoid runtime errors
  const init = async () => {
    try {
      await initialize(); // Wait for SDK initialization
      setIsInitialized(true); // Mark the app as ready for data operations
    } catch (err) {
      setError('Failed to initialize Power Apps SDK'); // Handle initialization errors
      setLoading(false); // Stop any loading indicators
    }
  };
  init(); // Call the initialization function when the component mounts
}, []);

useEffect(() => {
  if (!isInitialized) return;

  // Place your data reading logic here
}, [isInitialized]);
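
As an illustration, here is a minimal sketch of that data-reading step for the Leads table added earlier. The LeadsService import and the getAll method are assumptions modeled on the generated files; check src/generated/services in your project for the exact names and signatures:

// Minimal sketch: load leads once the SDK is initialized.
// LeadsService and getAll are assumed names - verify them against the
// files generated by "pac code add-data-source" in src/generated/services.
import { LeadsService } from './generated/services/LeadsService';
import type { Leads } from './generated/models/LeadsModel';

useEffect(() => {
  if (!isInitialized) return;

  const loadLeads = async () => {
    try {
      const result = await LeadsService.getAll(); // hypothetical method name
      setLeads((result?.data ?? []) as Leads[]);
    } catch (err) {
      setError('Failed to load leads from Dataverse');
    } finally {
      setLoading(false);
    }
  };

  loadLeads();
}, [isInitialized]);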


 

Step 5: Adding Office 365 Users Connector

Similar to Dataverse, we need to create a connection for Office 365 Users by following the same steps. Once the connection is ready, we need to add it to the application. First, list all available connections to get the connection ID using the command:

pac connection list

It will list all the connections available in the selected environment. Copy the connection ID for Office 365 Users from the list, then add it to the project using:

pac code add-data-source -a "shared_office365users" -c "<connection-id>"

After running this command, the Office 365 Users connector will be available in the application, allowing access to user profiles and other Office 365 user data.
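
As a quick illustration, a sketch like the one below can read the signed-in user's profile. The Office365UsersService export and the MyProfile_V2 method are assumptions based on the connector's operation names; verify them against the generated service file before use:

// Minimal sketch: read the signed-in user's profile.
// Office365UsersService and MyProfile_V2 are assumed names - check the
// generated file for the exact export, method, and parameter names.
import { Office365UsersService } from './generated/services/Office365UsersService';

const loadProfile = async () => {
  const profile = await Office365UsersService.MyProfile_V2(
    'displayName,mail,jobTitle' // fields to select (hypothetical parameter)
  );
  console.log(profile?.data?.displayName, profile?.data?.mail);
};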


Step 6: Building the UI

There are two ways to build a good UI. The first is the traditional coding approach where we write the complete code manually. The second is by using GitHub Copilot integrated in VS Code with the help of prompts.

Using GitHub Copilot:

We can generate the UI by writing a detailed prompt in GitHub Copilot. Here’s an example prompt:

Create a Personal Dashboard UI component in React with TypeScript that displays:

  1. A header section showing the current logged-in user’s profile information (name, email, job title, and profile photo) fetched from Office 365 Users connector
  2. Two main sections side by side:

– Left section: Display a list of assigned Cases (Incidents) from Dataverse

* Show case title, case number, priority, status, and created date

* Use card layout for each case

* Add loading state and error handling

– Right section: Display a list of assigned Leads from Dataverse

* Show lead name, company, topic, status, and created date

* Use card layout for each lead

* Add loading state and error handling

  3. Use modern, clean UI design with:

– Responsive layout (works on desktop and mobile)

– Tailwind CSS for styling

– Professional color scheme (blues and grays)

– Proper spacing and typography

– Loading spinners while data is fetching

– Error messages if data fails to load

After providing this prompt to GitHub Copilot, it will generate the complete component code. We can then review the generated code, make any necessary adjustments, and integrate it into our application.

Step 7: Deploy Your Code App

Once the code is complete and the app is running locally, the next step is to deploy the application. For Code Apps, deployment is straightforward. First, build the application by running:

npm run build

After a successful build, execute the following command to push the application to Power Apps:

pac code push

This command will deploy the application to Power Apps. To verify the deployment, go to the Power Apps portal and open the Apps section. The newly deployed Code App will be visible in the list as shown in the image below.

PowerAppsCode

To run the app, click the play button. On the first launch, the application will prompt for permission to access the connected data sources. After allowing the permissions, the application will use those connection references for all subsequent operations.

PowerAppsCode

 

PowerAppsCode

Conclusion

With Power Apps Code Apps, we can now build standalone applications. The real advantage here is the direct access to over 1,500 connectors through the Power Apps SDK. We can connect to Dataverse, Office 365 Users, and other services without writing any authentication code. The Power Apps SDK handles all the security, and each connector respects user level permissions automatically.

We also get complete freedom to design our own UI using any libraries we prefer. The deployment process is simple. Just run the build command and push it to Power Platform with a single command.

In this article, we built a Personal Dashboard that pulls data from Dataverse and Office 365 Users. The same approach works for any application that needs to connect with Power Platform services. The setup is straightforward, and once the project is initialized, adding new data sources is just a matter of running a few commands.

Code Apps provide a practical way to build custom applications within the Power Platform ecosystem while maintaining secure connections and proper access control.

Frequently Asked Questions (FAQs)

What are Power Apps Code Apps?

Power Apps Code Apps are a new application type in Microsoft Power Platform that allow developers to build standalone single-page applications using modern web frameworks such as React, Angular, or Vue. They provide direct access to Power Platform connectors through the Power Apps SDK without requiring custom authentication code.

How are Code Apps different from Canvas Apps and Model-Driven Apps?

Unlike Canvas Apps and Model-Driven Apps, Code Apps:

  • Are fully standalone applications
  • Use a pro-code development approach
  • Allow complete control over UI and application architecture
  • Cannot be embedded into Canvas or Model-Driven Apps
  • Use modern frontend frameworks instead of low-code designers

Do Power Apps Code Apps require authentication setup?

No. Authentication is handled automatically by the Power Apps SDK. Developers do not need to implement OAuth flows, manage tokens, or configure app registrations. All connectors enforce user-level permissions by default.

Can Power Apps Code Apps connect to Dataverse?

Yes. Power Apps Code Apps can connect directly to Dataverse using the Dataverse connector. Developers can perform CRUD operations on Dataverse tables, such as Leads and Incidents once the SDK is initialized.

How do Code Apps access Office 365 user information?

Code Apps use the Office 365 Users connector to retrieve profile details such as name, email, job title, and profile photo. The connector respects the signed-in user’s permissions automatically.



Build AI-Powered Apps in Minutes with Power Apps Vibe: A Complete Guide (Preview)

Power Apps Vibe

If you’ve ever tried building apps with Microsoft Power Apps, you know the process: creating tables, designing screens, adding controls, connecting data, and writing formulas. While the traditional app-building process is effective, it can also be time-consuming and complex.

But now, imagine this:

You simply describe the app you need, and within minutes, Power Apps Vibe takes over:

  • A complete data model is generated.
  • UI screens are automatically designed.
  • Built-in logic is incorporated.
  • A functional prototype is ready to go.

All this, without having to drag a single control or write a line of code.

Welcome to Power Apps Vibe—a revolutionary AI-powered app development platform. Unlike traditional app design methods, Power Apps Vibe makes building apps simpler, faster, and more intuitive than ever before.

Instead of spending hours designing screens and wiring logic, Vibe transforms app development into a simple, conversational experience. You describe what you need, and it creates the foundation for your app—data model, UI, navigation, and logic—automatically.

Power Apps Vibe

In this blog, I’ll break down what Vibe is, why Microsoft created it, and how you can start building full-stack apps with nothing more than a sentence.

What is Power Apps Vibe?

Power Apps Vibe is Microsoft’s AI-driven app-building experience, designed to simplify app development. Available now in preview, this feature combines the best aspects of low-code and AI-powered development into a single, seamless interface.

Unlike traditional app-building tools such as Canvas or Model-Driven apps, Vibe functions like a creative partner, helping you bring your app ideas to life faster. Here’s how it works:

  • You describe your app’s requirements in simple language.
  • Power Apps Vibe automatically creates:
    • The data model behind your app.
    • The UI screens you need.
    • Navigation and action flows.
    • The core logic for functionality.

You still have full control to modify or refine any aspect of the app. Think of Power Apps Vibe as a combination of Power Apps, an AI architect, a UI designer, and a developer, all within a single interface.

Why Did Microsoft Introduce Power Apps Vibe?

The goal behind Power Apps Vibe is simple: to make app development faster, smarter, and more accessible for everyone, from business users to developers.

Organizations often face challenges such as:

  • Long development cycles
  • Lack of skilled developers
  • Difficulty translating business ideas into working apps
  • Fragmented requirements
  • Slow prototype development

Power Apps Vibe addresses these issues by enabling anyone, whether a business user, analyst, or developer, to rapidly create a solid app foundation. With Vibe, you can skip the time-consuming setup and dive straight into customizing the app for your specific needs.

We can maintain full control for customization, but the time-consuming initial setup is handled for us.

Where Do You Access Power Apps Vibe?

Currently, Power Apps Vibe is available in preview and is not yet part of the standard Power Apps studio. To get started, navigate to the preview portal:

🔗 https://vibe.preview.powerapps.com

Simply sign in with your Microsoft account, and you'll be greeted with a clean, intuitive workspace. A large prompt box will be ready for your ideas, making it easy to get started.

Power Apps Vibe

How to Build an App Using Vibe?


Here’s what surprises most people:

Using Power Apps Vibe feels less like coding and more like having a conversation with a colleague. You describe what you need, and Vibe does the heavy lifting. Here’s how the process works:

Let’s walk through the complete process step by step.

Step 1: Describe the App You Want

In the prompt box, simply describe your app in plain language. You don’t need to worry about technical jargon or formatting. For example:

“I want to build a Time Entry Admin app. Admins should be able to update the Base Pay Code, view a list of time entries, and edit the Base Pay Code only on this screen.”

Power Apps Vibe


Step 2: Vibe Generates Your App Plan

Once you submit your prompt, Vibe analyses your requirements and generates a detailed plan. This blueprint typically includes:

  • The tables it will create
  • The fields within those tables
  • The screens your app will have
  • Actions and commands for functionality
  • Navigation flow between screens

Test Prompt:

“Create an app for managing Time Entries. The main screen should list all time entries. When I click a row, take me to a detail screen. Admins should be able to update the Base Pay Code on this screen. Non-admin users should not be able to edit this field.”

Power Apps Vibe

It’s essentially the blueprint of your app. If something doesn’t look right, you don’t need to start over – just refine your prompt. For example:

  • Add an audit field
  • Change the name of this table
  • Make Base Pay Code read-only for non-admins

Vibe instantly updates the plan based on your instructions, making the process feel conversational and effortless.

Step 3: Create the App

Once your plan looks good, simply click Create App.

Vibe now builds:

  • The user interface (UI)
  • Interactive forms
  • The underlying data model
  • Core logic for functionality

This process yields a functional web application that is available for immediate preview.

Power Apps Vibe

Vibe handles all the heavy lifting so you can focus on refining ideas instead of wrestling with syntax.

Step 4: Refine the App Through Natural Language

This is where Vibe feels different from anything we’ve seen before.

You can simply chat with it:

  • “Make the Base Pay Code field bigger.”
  • “Add a dashboard screen with totals.”
  • “Add a search bar at the top.”
  • “Show only records assigned to the logged-in user.”

And Vibe will update the app instantly.

It’s the first time Power Apps feels like a conversation instead of a tool.

Step 5: Save Your App

When you save the app for the first time, Power Apps stores:

  • the app
  • the plan
  • the screens
  • and the data model

All inside a single solution.

It becomes part of your Power Apps environment, just like any other app.

Step 6: Connect to Real Data (Optional)

When you first build the app, it uses "draft data": temporary tables that exist only for prototyping.

Once your app is ready for real use:

  1. Go to Data
  2. Connect to Dataverse, SQL, SharePoint, or any supported source
  3. Map the fields
  4. Publish the app again

This step turns your prototype into a production-ready application.

Step 7: Publish and Share

Once everything looks right, click Publish.

Your app becomes live, and you can share it with your team exactly like any other Power App.

Where Power Apps Vibe Really Shines

After playing with it, I realized Vibe is perfect for:

  • Rapid prototyping
  • Converting ideas into real apps within minutes
  • Building admin tools
  • Internal dashboards
  • Small line-of-business apps
  • Automating manual processes
  • Mockups for client demos
  • Reducing the back-and-forth between business teams and developers

It reduces friction. It reduces waiting. It reduces technical complexity.

You still get full control — formulas, data, actions, security, connectors — everything you normally have in Power Apps remains available.

But the start is dramatically faster.

Limitations to Keep in Mind for Power Apps Vibe

Since Vibe is still a preview feature, a few things have limitations:

  • You cannot edit Vibe apps in the classic Canvas app studio.
  • If you export/import the solution, it may break the link with the AI “plan.”
  • It currently supports creating only one app per plan.
  • Existing Dataverse tables aren’t automatically suggested during generation.
  • Some refinements still need to be done manually.

But even with these limitations, Vibe is powerful enough to start real-world projects and prototypes.

Final Thoughts

Power Apps Vibe is one of the biggest updates to the Power Platform in years.
It brings a fresh, modern, conversational style of development that feels more natural and less stressful.

Instead of spending hours designing screens and wiring logic, you can now focus on:

  • Refining ideas,
  • Improving workflows,
  • And delivering value faster.

If you haven’t tried it yet, open the preview today and type the first idea that comes to mind.
You’ll be surprised how quickly it becomes a working app.

Frequently Asked Questions: Power Apps Vibe

1. What is Power Apps Vibe and how is it different from traditional Power Apps development?

Power Apps Vibe is an AI-powered app-building tool that allows you to create full-stack apps simply by describing your requirements in natural language. Unlike traditional Power Apps, which involve manually designing screens and writing formulas, Vibe automatically generates the data model, UI, navigation, and logic. It simplifies app development by transforming it into a conversational, automated process.

2. Can I use Power Apps Vibe without any coding knowledge?

Yes, Power Apps Vibe is designed for users with little or no coding experience. It allows you to create apps by simply describing what you want in plain language. The AI handles the complex aspects of app development, such as data modeling, UI design, and logic, so you can focus on refining your ideas rather than writing code.

3. Is Power Apps Vibe available for all users or only those in certain regions?

Currently, Power Apps Vibe is in preview and can be accessed by users who sign in through the dedicated portal at https://vibe.preview.powerapps.com. While the feature is available globally, its availability might vary based on regional preview settings and Microsoft’s rollout timeline. Keep an eye on updates for broader access.

4. What are some limitations of Power Apps Vibe?

While Power Apps Vibe is a powerful tool, it does have some limitations:

  • You cannot edit Vibe-generated apps in the classic Canvas App Studio.
  • The feature currently supports only one app per plan.
  • Existing Dataverse tables aren’t automatically suggested during the app creation process.
  • Some refinements still require manual adjustments after the initial app is generated.

5. How can I connect my Power Apps Vibe app to real data?

Once your prototype is ready, you can connect your Power Apps Vibe app to real data by navigating to the Data section within Power Apps and linking it to supported data sources such as Dataverse, SQL, or SharePoint. After mapping the fields, you can publish the app again to make it production-ready.



How Copilot Studio Leverages Deep Reasoning for Intelligent Support Operations

CopilotStudio

Deep Reasoning in Microsoft Copilot Studio enables AI agents to analyze multi-step support scenarios, evaluate historical case data, apply business rules, and recommend well-reasoned actions similar to how an experienced support specialist thinks.

AI agents are becoming a core part of customer service operations, but traditional conversational models often struggle when scenarios become complex, like diagnosing a multi-step issue, understanding multi-turn case histories, or recommending the next best action.
Microsoft’s new Deep Reasoning capability in Copilot Studio (currently in preview) bridges this gap by enabling agents to think more logically and deliver more accurate conclusions.

This feature equips Copilot agents with advanced analytical abilities similar to how a skilled support specialist breaks down a problem, evaluates evidence, and suggests well-reasoned actions.

How Deep Reasoning Works

Deep reasoning is powered by an advanced Azure OpenAI model (o3), optimized for:

  • Multi-step thinking
  • Logical deduction
  • Complex problem solving
  • Chain-of-thought analysis
  • Context comprehension across long conversations

When enabled, the agent automatically decides when to invoke the deep reasoning model, especially during:

  • Complicated queries
  • Multi-turn conversations
  • Tasks requiring decision making
  • Summaries of large case files
  • Applying business rules

Alternatively, you can instruct the agent to explicitly use deep reasoning by including the keyword “reason” in your agent instructions.

Business Use Case:

Imagine a company that manages thousands of service cases, technical issues, warranty requests, customer complaints, and product inquiries.
Handling these efficiently requires deep understanding of:

  • Historical case data
  • Case descriptions across multiple interactions
  • Dependencies (products, warranties, previous repairs, SLAs)
  • Business rules
  • Customer communication patterns

A standard AI model can answer simple questions, but when a customer or sales representative asks something like:

  • Why was this customer’s case reopened three times?
  • Given the reported symptoms and past activity, what should be the next troubleshooting step?
  • Which SLA should be applied in this situation, and what is the reasoning behind it?
  • Considering the notes from all three departments, what appears to be the underlying root cause?

Your agent needs more than a direct lookup.
It needs reasoning.

This is where Deep Reasoning dramatically improves the experience.

How to Enable Deep Reasoning in Copilot Studio (Step-by-Step)

Setting up deep reasoning in a Copilot Studio agent is straightforward:

Step 1. Enable generative orchestration

This allows the agent to decide intelligently which model should handle each part of the conversation.

Step 2. Turn on Deep Reasoning

When enabled, the o3 model is added to the agent’s orchestration pipeline.

CopilotStudio

Step 3. Add the reason keyword (optional but recommended)

Inside the Agent Instructions, specify where deep reasoning should be applied:

As shown in the screenshot below, the word "reason" is used twice to trigger deep reasoning in our custom agent.

CopilotStudio
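
For illustration, instructions along these lines (hypothetical wording, not the exact text from the screenshot) would steer the agent toward deep reasoning:

When a user asks why a case was reopened or escalated, reason over the full case history before answering.
When a user asks which SLA applies, reason through the product, warranty, and prior-repair details and explain your recommendation.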

Step 4. Connect data sources

You can link multiple sources such as:

  • Dataverse Cases table
  • Knowledge bases
  • SharePoint documents
  • Product manuals
  • Troubleshooting guides

Deep reasoning enables the agent to interpret and analyze these materials more effectively.
For this example, I connected a Dataverse MCP server to provide the agent with improved access to Dataverse tables.

CopilotStudio

Step 5. Test complex scenarios

Ask real-world questions like:

  • Analyze the case history and determine the most likely root cause.
  • Based on the customer’s issue description, what steps should the technician take next?
  • Explain why this case breached SLA.

You will notice the agent provides a structured, logical answer rather than surface-level information.

CopilotStudio

You can also verify that deep reasoning was activated by checking the Activity section.

CopilotStudio

Frequently Asked Questions About Deep Reasoning in Copilot Studio

What model powers Deep Reasoning in Copilot Studio?
Deep Reasoning is powered by the Azure OpenAI o3 reasoning model, optimized for multi-step analysis and logical deduction.

When should Deep Reasoning be used?
It should be applied to complex, multi-turn conversations involving business rules, SLAs, historical data, or decision-making.

Does Deep Reasoning replace standard Copilot responses?
No. Copilot Studio dynamically decides when Deep Reasoning is required, using standard models for simpler interactions.

Can Deep Reasoning analyze large case histories?
Yes. It is specifically designed to interpret long conversations and large volumes of contextual data.

Conclusion

By connecting rich data sources and enabling deep reasoning, the agent becomes significantly more capable of understanding complex case scenarios and providing meaningful, actionable responses. When tested with real-world questions, the agent demonstrates structured analysis, logical decision-making, and deeper insights rather than surface-level replies.

This ensures more accurate case resolutions, improved productivity, and a smarter, more reliable support experience.



How to Download Large Files from Dynamics 365 CRM Using BlocksDownloadRequest API

How to Download Large Files from Dynamics 365 CRM Using BlocksDownloadRequest API

Introduction

When working with large files in Microsoft Dataverse (Dynamics 365 CRM), standard download methods often fail due to payload size limits, network interruptions, or memory overload. To address these challenges, Dataverse provides a chunked, block-based download mechanism through APIs such as:

  • InitializeFileBlocksDownloadRequest
  • InitializeAttachmentBlocksDownloadRequest
  • InitializeAnnotationBlocksDownloadRequest

This method is the recommended and most reliable way to download large files in Dynamics 365.

Why Use Chunked Download Requests?

Common challenges with large file downloads:
• Timeouts or payload size limits
• Unstable or slow networks (especially in mobile/VPN environments)
• Memory overload when downloading full files at once

To overcome these, Dataverse supports block-based downloads. These requests initialize the operation and return a continuation token and file metadata, enabling files to be retrieved in chunks.

Benefits of this approach include:
• Reliable, resumable downloads
• Optimized memory and bandwidth usage
• Scalable for mobile apps, portals, and external systems

Available Chunked Download Requests

  • InitializeFileBlocksDownloadRequest – For files stored in File or Image columns.
  • InitializeAttachmentBlocksDownloadRequest – For email attachments in the ActivityMimeAttachment table.
  • InitializeAnnotationBlocksDownloadRequest – For note attachments stored in the Annotation table.

How to Use Chunked Download Requests

The download process consists of three main steps:
1. Initialize the download request
2. Retrieve file blocks using DownloadBlockRequest
3. Assemble or save the file locally

Example 1: Downloading File Column Data

C# Code Sample:

var initRequest = new InitializeFileBlocksDownloadRequest
{
    Target = new EntityReference("incident", incidentId),
    FileAttributeName = "reportfile"
};

var initResponse = (InitializeFileBlocksDownloadResponse)service.Execute(initRequest);

// The continuation token ties all subsequent block requests to this download session.
var token = initResponse.FileContinuationToken;
var fileName = initResponse.FileName;
var fileSize = initResponse.FileSizeInBytes;

long offset = 0;
long blockSize = 4 * 1024 * 1024; // 4 MB blocks, per Microsoft guidance
var fileBytes = new List<byte>();

while (offset < fileSize)
{
    var downloadRequest = new DownloadBlockRequest
    {
        FileContinuationToken = token,
        Offset = offset,
        // Never request more bytes than remain in the file.
        BlockLength = Math.Min(blockSize, fileSize - offset)
    };

    var downloadResponse = (DownloadBlockResponse)service.Execute(downloadRequest);
    fileBytes.AddRange(downloadResponse.Data);
    offset += downloadResponse.Data.Length;
}

File.WriteAllBytes($"C:\\DownloadedReports\\{fileName}", fileBytes.ToArray());
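
For very large files, accumulating every block in a List<byte> keeps the entire file in memory. A minimal variation writes each block straight to a FileStream as it arrives (a sketch assuming the service, token, fileName, fileSize, and blockSize variables from the example above):

using (var stream = new FileStream($"C:\\DownloadedReports\\{fileName}", FileMode.Create))
{
    long offset = 0;
    while (offset < fileSize)
    {
        var downloadResponse = (DownloadBlockResponse)service.Execute(new DownloadBlockRequest
        {
            FileContinuationToken = token,
            Offset = offset,
            BlockLength = Math.Min(blockSize, fileSize - offset)
        });

        // Stream each block to disk instead of buffering the whole file.
        stream.Write(downloadResponse.Data, 0, downloadResponse.Data.Length);
        offset += downloadResponse.Data.Length;
    }
}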

Example 2: Downloading Email Attachments

To download an email attachment:

var initRequest = new InitializeAttachmentBlocksDownloadRequest
{
    Target = new EntityReference("activitymimeattachment", attachmentId)
};

var initResponse = (InitializeAttachmentBlocksDownloadResponse)service.Execute(initRequest);
var token = initResponse.FileContinuationToken;
var fileName = initResponse.FileName;
var fileSize = initResponse.FileSizeInBytes;

long offset = 0;
long blockSize = 4 * 1024 * 1024;
var fileBytes = new List<byte>();

while (offset < fileSize)
{
    var downloadRequest = new DownloadBlockRequest
    {
        FileContinuationToken = token,
        Offset = offset,
        BlockLength = Math.Min(blockSize, fileSize - offset)
    };

    var downloadResponse = (DownloadBlockResponse)service.Execute(downloadRequest);
    fileBytes.AddRange(downloadResponse.Data);
    offset += downloadResponse.Data.Length;
}

// Persist the assembled attachment to disk.
File.WriteAllBytes($"C:\\DownloadedAttachments\\{fileName}", fileBytes.ToArray());

Example 3: Downloading Note Attachments

To download a note file from an annotation:

var initRequest = new InitializeAnnotationBlocksDownloadRequest
{
    Target = new EntityReference("annotation", noteId)
};

var initResponse = (InitializeAnnotationBlocksDownloadResponse)service.Execute(initRequest);
var token = initResponse.FileContinuationToken;
var fileSize = initResponse.FileSizeInBytes;

long offset = 0;
long blockSize = 4 * 1024 * 1024;
var fileBytes = new List<byte>();

while (offset < fileSize)
{
    var downloadRequest = new DownloadBlockRequest
    {
        FileContinuationToken = token,
        Offset = offset,
        BlockLength = Math.Min(blockSize, fileSize - offset)
    };

    var downloadResponse = (DownloadBlockResponse)service.Execute(downloadRequest);
    fileBytes.AddRange(downloadResponse.Data);
    offset += downloadResponse.Data.Length;
}

Real-World Example: Case Attachments in Customer Support

Scenario:
Customer support agents frequently upload large evidence files into a Dataverse file column, such as high-resolution screenshots, diagnostic logs, product failure images, or customer-submitted recordings. These files often range from 10 MB to over 100 MB, especially when dealing with technical issues or multimedia evidence.

Challenge:
Using standard download methods often leads to:
• Browser timeouts due to file size
• Failed downloads for VPN/home-office users
• Performance issues when loading large files into memory
• Problems for Power Pages or portal users with unstable network conditions

Solution:
By using InitializeFileBlocksDownloadRequest, the system downloads large attachments in safe, resumable chunks (typically 4 MB each). If the network drops or a chunk fails, only that block is retried, not the entire file.

Result:
• Escalation teams can download case evidence without interruption
• Remote and field technicians experience reliable downloads even on hotspot connections
• Large multimedia files no longer freeze or crash the application
• Faster resolution times and improved SLA performance

Conclusion

These chunked download requests offer a scalable, performant, and resilient way to retrieve large files from Dynamics 365 Dataverse. Whether working with file columns, email attachments, or notes, using block-based download logic ensures optimal handling of high-volume content in business-critical applications.

FAQ

1. Can I download files larger than 100MB using this method?

Yes. Block-based download supports very large files.

2. What is the recommended block size?

4 MB per Microsoft guidance.

3. Does chunked download work for Power Apps and external apps?

Yes, as long as the app uses the Dataverse Web API or SDK.

4. Can I resume a failed download?

Yes, you can retry the failed chunk because progress is tracked by offset.
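
A minimal retry sketch, assuming the service, token, fileSize, and blockSize variables from the earlier examples and a simple fixed retry limit:

const int maxRetries = 3;
long offset = 0;
var fileBytes = new List<byte>();

while (offset < fileSize)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            var downloadResponse = (DownloadBlockResponse)service.Execute(new DownloadBlockRequest
            {
                FileContinuationToken = token,
                Offset = offset,
                BlockLength = Math.Min(blockSize, fileSize - offset)
            });

            fileBytes.AddRange(downloadResponse.Data);
            offset += downloadResponse.Data.Length; // progress is kept even if a later block fails
            break;
        }
        catch (Exception) when (attempt < maxRetries)
        {
            // Only this block is re-requested; blocks already downloaded are kept.
        }
    }
}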

The post How to Download Large Files from Dynamics 365 CRM Using BlocksDownloadRequest API first appeared on Microsoft Dynamics 365 CRM Tips and Tricks.

How to Automate Image Descriptions with AI Builder in Power Automate

Automate Image Descriptions with AI Builder in Power Automate

In today’s fast-paced digital world, automating repetitive tasks not only saves time but also significantly improves productivity. Microsoft now offers a powerful preview AI model that enables automatic generation of image descriptions using the AI Builder’s prebuilt Image Description model in Power Automate Flow.

This smart tool analyzes your images and generates easy-to-understand, meaningful descriptions. These descriptions are really helpful for organizing your files, sorting images, and making content more accessible, all without you having to do anything manually.

How the AI Builder Image Description Model Works

The AI Builder Image Description model uses advanced computer vision to understand what’s in an image and convert that visual content into meaningful text. This makes tasks like organizing files, generating metadata, creating reports, or improving accessibility much easier because the system provides clear descriptions instantly and automatically.

Here’s a breakdown of how the model works behind the scenes:

1. It analyzes the image and generates three key outputs:

  • A description (English only): A simple, human-readable explanation of what the image contains.
  • Tags: Keywords that highlight the main objects, themes, or concepts detected in the image.
  • A confidence score: A percentage indicating how certain the model is about its description.

2. It supports only these image file formats:

.JPG, .JPEG, .PNG, and .BMP
Uploading any other format will cause the action to fail.

3. Image size requirements:

  • Maximum file size: 4 MB
  • Minimum resolution: 50 × 50 pixels

4. Role and licensing requirement:

You only need the Basic User role to use this model inside a Power Automate flow; no special admin permissions are required.

Important note:
This feature is currently in preview, which means it works reliably for simple descriptions but is not recommended for production use yet.

Prerequisites:

Before creating a flow using this model, ensure you have:

  • Access to Microsoft Power Automate
  • Access to Dataverse

Step-by-Step Guide: Automate Image Descriptions

1. Create a New Flow

  • Sign in to your Dataverse environment.
  • Open Power Automate, and click Create → Instant cloud flow (or choose another flow type based on your use case).

Automate Image Descriptions with AI Builder in Power Automate

2. Add File Input (Optional)

  • In the trigger step, click Add an input → Select File.

Automate Image Descriptions with AI Builder in Power Automate

  • This allows you to upload an image manually when testing the flow.

3. Add the AI Builder Action

  • Click New Step → Search for “AI Builder” → Select “Describe images (Preview)”.

Automate Image Descriptions with AI Builder in Power Automate

  • In the Image field, choose File Content from Dynamic Content.

Automate Image Descriptions with AI Builder in Power Automate

4. Add Post-Processing Logic (Optional)

  • Once the description is generated, you can add further steps such as:
    • Storing the description in a database.
    • Sending an email notification.

Automate Image Descriptions with AI Builder in Power Automate

5. Save and Test the Flow

  • Click Save.
  • Select Test → Manually, and upload an image when prompted.
  • The flow will run and automatically generate a description, confidence score, and related Tags.

Automate Image Descriptions with AI Builder in Power Automate

Automate Image Descriptions with AI Builder in Power Automate

Important Consideration (as per Microsoft docs):
This AI Builder model currently supports only the English language, and only the following image formats: JPEG, PNG, GIF, and BMP. Uploading other file types will result in a failed operation.

This feature is still in preview and currently provides straightforward descriptions. However, in the future, it has the potential to generate more detailed and complex image descriptions.

When Should You Use the Image Description Model?
A common use case is when users upload images to your system and descriptions are generated automatically. This helps with:

  • Product Image Metadata: Automatically generate captions for product images in catalogs.
  • Accessibility: Provide alt-text for images on websites and in documents.
  • Content Tagging: Tag images with relevant labels for smarter search.
  • Surveillance or Monitoring: Describe visual scenes (people, objects, activity) for easier review or alerts.

FAQs

  1. How does the AI Builder Image Description model help my workflow?
    It automatically scans your images and generates clear, meaningful descriptions along with tags and confidence scores. This means faster content organization, better accessibility, and zero manual effort.
  2. What image formats can I upload?
    The model currently supports JPG, JPEG, PNG, and BMP files. Using any other format will stop the flow from running successfully.
  3. Are there any image size limits I should know about?
    Yes, your image must be under 4 MB and at least 50 × 50 pixels. Staying within these limits ensures smooth processing.
  4. Does it work with multiple languages?
    For now, it generates descriptions only in English, as highlighted in the blog.
  5. Is this ready for production use?
    Not yet. Since the feature is still in preview, it’s ideal for testing, prototyping, and internal automation but not for mission-critical production scenarios.
  6. Do I need admin rights to use this in my Power Automate flow?
    No. The Basic User role is all you need to start using the Image Description model, making it easy for anyone in your team to adopt.

Conclusion:
Power Automate’s Image Description prebuilt model makes it effortless to generate meaningful image descriptions — without writing a single line of code. Whether it’s for improving accessibility or automating content organization, this tool empowers you to streamline processes, save time, and increase efficiency.

The post How to Automate Image Descriptions with AI Builder in Power Automate first appeared on Microsoft Dynamics 365 CRM Tips and Tricks.

A Practical Guide to Background Operations and Callback URL in Dynamics 365: Part 2

Operations and Callback URL in Dynamics 365

Handling large-scale data tasks in Dynamics 365 CRM can be challenging, especially when syncing thousands of records with external systems. Background Operations allow these resource-intensive tasks to run asynchronously, keeping the system responsive.

In this article, we’ll walk through the technical setup using a practical scenario: syncing thousands of records with an external system. You’ll learn how to create a Background Operation and use a Callback URL to get notified or trigger other processes automatically once the job completes.

For insights into synchronous vs asynchronous workflows and why Background Operations are essential for large data sets, refer to Part 1.

The Scenario: Syncing Records with External System

Many organizations use both Dynamics 365 CRM and an external ERP system, requiring regular synchronization of all customer and order data. This includes not only the records themselves but also related data in the external system.

Using a Background Operation allows this process to run asynchronously, without affecting other running processes. It also supports a Callback URL, which notifies the administrator automatically once the operation is completed, eliminating the need for manual monitoring.

With a Background Operation, all syncing logic can be consolidated into a single request, allowing Dynamics 365 to run the large job in the background without disturbing users, while automatically informing the system or administrators when the task is finished.

Technical Setup: Running Background Operations Asynchronously

To implement this in Dynamics 365, a Custom API containing the sync logic must be created and triggered via the ExecuteBackgroundOperation request.

Step 1: Create a Custom API

Create a Custom API in Dynamics 365 called SyncRecordsToExternalSystem. Back it with a plugin, which handles:

  • Querying Dynamics 365 records
  • Sending data to the external system via API
  • Handling responses and updating sync status flags

This plugin runs in the background, removing timeout limitations and efficiently processing thousands of records.
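
A skeletal sketch of that plugin is shown below. It is illustrative only: the input parameter names mirror those passed in Step 2, the external API call is merely indicated, and new_syncstatus is a hypothetical status column.

public class SyncRecordsToExternalSystemPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        // Input parameters defined on the sample_SyncRecordsToExternalSystem Custom API.
        var entityName = (string)context.InputParameters["EntityName"];
        var lastSyncDate = (DateTime)context.InputParameters["LastSyncDate"];

        // 1. Query the records modified since the last sync.
        var query = new QueryExpression(entityName) { ColumnSet = new ColumnSet(true) };
        query.Criteria.AddCondition("modifiedon", ConditionOperator.GreaterEqual, lastSyncDate);
        var records = service.RetrieveMultiple(query);

        // 2. Send each record to the external system and flag it as synced.
        foreach (var record in records.Entities)
        {
            // ... push the record to the external API here (e.g., via HttpClient) ...

            var update = new Entity(entityName, record.Id);
            update["new_syncstatus"] = true; // hypothetical sync status column
            service.Update(update);
        }
    }
}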

Step 2: Trigger the Background Operation

Next, you need a way to call your new Custom API. The key is that instead of executing your Custom API directly, you wrap it in an ExecuteBackgroundOperation request.

This tells Dynamics 365 to take your request (the asyncRequest) and run it as a background job, giving you back a BackgroundOperationId to track it.

Here is a C# code snippet, similar to the one from Part 1, but this time we are calling our new Custom API. This code would typically run in another plugin (e.g., on a scheduled job or button click).

public void Execute(IServiceProvider serviceProvider)
{
    // Services
    IPluginExecutionContext context =
        (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

    ITracingService tracingService =
        (ITracingService)serviceProvider.GetService(typeof(ITracingService));

    IOrganizationServiceFactory serviceFactory =
        (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));

    IOrganizationService service =
        serviceFactory.CreateOrganizationService(context.UserId);

    try
    {
        tracingService.Trace("Starting background sync operation...");

        // 1. Create the request for your Custom API that syncs records.
        var asyncRequest = new OrganizationRequest("sample_SyncRecordsToExternalSystem")
        {
            Parameters =
            {
                // You can pass parameters to your logic.
                { "EntityName", "account" },
                { "SyncMode", "Incremental" },
                { "ExternalSystemUrl", "https://api.erpsystem.com/v1/customers" },
                { "LastSyncDate", DateTime.UtcNow.AddDays(-1) }
            }
        };

        // 2. Create the request to execute your Custom API in the background.
        var request = new OrganizationRequest("ExecuteBackgroundOperation")
        {
            Parameters =
            {
                { "Request", asyncRequest },

                // 3. Request a callback. This is a Power Automate Flow URL.
                //    This flow will be triggered when the job is done.
                { "CallbackUri", "https://prod-123.westeurope.logic.azure.com/workflows/..." }
            }
        };

        // Execute the background operation request.
        var response = service.Execute(request);

        // This ID lets you monitor the job in the "Background Operations" view.
        tracingService.Trace($"BackgroundOperationId: {response["BackgroundOperationId"]}");
        tracingService.Trace($"Location: {response["Location"]}");
    }
    catch (Exception ex)
    {
        tracingService.Trace($"Exception: {ex.Message}");
        throw new InvalidPluginExecutionException("Sync background job failed.", ex);
    }
}

Step 3: Monitor the Background Operation

You can monitor the progress under Advanced Settings → Settings → System Jobs → Background Operations, where you can view status, duration, parameters, and error logs easily.

Step 4: Handle the Completion with the Callback URL

This is where the magic happens. Your sync job might take 30 minutes, 4 hours, or even a whole day. You don’t want to sit and watch the “System Jobs” screen.

The CallbackUrl you provided (e.g., the URL for a Power Automate HTTP-triggered flow) will be called automatically by Dynamics 365 the moment the operation finishes.

Your Power Automate flow can then:

  • Parse the response from the job.
  • Check if it ‘Succeeded’ or ‘Failed’.
  • Send an email to the CRM Administrator with a summary (“Sync job complete. 2,456 records successfully synced to external system.”).
  • Log the completion details for auditing.
  • Update a sync status dashboard or field.
  • Trigger error handling workflows if the sync failed.
  • Schedule the next incremental sync.

Conclusion

Handling large-scale integration operations can be challenging, but Dynamics 365 provides the right tools to manage them efficiently. By combining Custom APIs with the ExecuteBackgroundOperation message, you can safely run heavy, resource-intensive tasks like external system synchronization without affecting users or system performance.

With the CallbackUrl, Dynamics 365 automatically notifies you once the background job is completed. It can trigger actions like sending a summary email, logging results, updating dashboards, or scheduling the next sync, removing the need for manual monitoring.

FAQs: Dynamics 365 Background Operations & Callback URL

What are Background Operations in CRM?

Background Operations in Dynamics 365 CRM are asynchronous processes that run in the background, allowing long-running or resource-intensive tasks to execute without slowing down the system. They are ideal for bulk updates, data migration, or integration with external systems.

How does a Callback URL work in CRM?

A Callback URL is an endpoint (like a Power Automate flow, webhook, or API) that Dynamics 365 CRM automatically calls when a Background Operation completes. It enables automatic notifications, follow-up workflows, dashboard updates, or error handling.

Why are asynchronous workflows better for large data sets in Dynamics 365?

Asynchronous workflows, such as Background Operations, prevent system slowdowns or timeout errors when processing thousands of records. Unlike synchronous workflows, they allow users to continue working while large data operations run in the background.

Can Background Operations sync data with external systems?

Yes. Background Operations in Dynamics 365 CRM can execute custom APIs to synchronize records with external ERP or other third-party systems efficiently, ensuring data consistency and minimal disruption to users.

How can administrators monitor Background Operations?

Administrators can monitor Background Operations under Advanced Settings → System Jobs → Background Operations, checking status, parameters, duration, and error logs for each job.

The post A Practical Guide to Background Operations and Callback URL in Dynamics 365: Part 2 first appeared on Microsoft Dynamics 365 CRM Tips and Tricks.

How to Monitor Power Platform Resources Using Alerts

How to Monitor Power Platform Resources Using Alerts

Overview

Managing a large-scale Microsoft Power Platform environment can be challenging, especially when it involves multiple Dynamics 365 CRM applications used across sales, service, and marketing teams. Performance issues such as slow loading, app crashes, or access failures can often go unnoticed until they escalate, impacting productivity and user experience.

To address this challenge, the Alerts feature in the Power Platform Admin Center provides a proactive way to monitor resource health and application performance. By setting up alerts, administrators can detect and resolve issues before they affect end users, ensuring higher uptime and smoother operations.

Business scenario

A global organization leveraging Dynamics 365 CRM for day-to-day operations faced recurring issues where users encountered delays or errors while opening model-driven apps. Often, these incidents went unreported until they disrupted workflows and caused frustration among teams.

After implementing Power Platform Alerts, administrators began receiving immediate notifications whenever the app’s success rate dropped below 100%. These alerts helped identify potential issues such as connection failures, permission errors, or licensing conflicts, well before users experienced any disruptions.

This proactive monitoring drastically improved reliability, minimized downtime, and enhanced user confidence across the organization.

How do Power Platform Alerts work?

Prerequisites

To configure and manage Power Platform Alerts:

  • The admin must have the Tenant Administrator or Environment Administrator role.
  • The target environment must be of Managed Type.

Step-by-step guide to Setting Up Alerts

Step 1: Access the Power Platform Admin Center

Open the Power Platform Admin Center and sign in with the appropriate administrator credentials.

Monitoring Your Power Platform Resources with Alerts

Step 2: Navigate to the Monitor Section

From the left-hand navigation pane, select Monitor.

Monitoring Your Power Platform Resources with Alerts

Step 3: Open the Alerts Dashboard

Within the Monitor section, click on Alerts to access the alerts dashboard.

Monitoring Your Power Platform Resources with Alerts

Step 4: Create a New Alert Rule

Click + Alert rule to create a new alert.
A configuration panel will appear where administrators can define the alert’s parameters.

Monitoring Your Power Platform Resources with Alerts

Monitoring Your Power Platform Resources with Alerts

Step 5: Configure the Alert Details

  • Name: Enter a clear, descriptive name for the alert.
  • Entity to Track: Choose whether to monitor a Power Automate flow or a Power App.
  • Scope: Set the scope to Environment and select the target environment.
  • Metric and Condition: Select a metric to monitor and define conditions (e.g., “is equal to” or threshold-based triggers).
  • Severity: Set the alert level to Low, Medium, or High.
  • Notification Type: Decide whether to receive alerts via Email or view them only in the Admin Center.

Once saved, the system automatically evaluates current data to determine if any alerts should be triggered.

Monitoring Your Power Platform Resources with Alerts

Step 6: Define Metrics, Conditions, and Notifications

Select the metric to be monitored and set the trigger condition, such as “is equal to” a specific value or threshold. Next, choose the severity level for the alert (Low, Medium, or High) to indicate its importance.

Finally, decide how the alert should be delivered:

  • Email Notification: Sends the alert directly to the configured administrator’s email.
  • Admin Center Notification: Displays the alert within the Power Platform Admin Center dashboard.

Once configured, the system will automatically evaluate the defined conditions and trigger alerts whenever the monitored metric meets or exceeds the set threshold.

Monitoring Your Power Platform Resources with Alerts

Step 7: Save and Trigger the Alert

Click Save to activate the alert rule. Once saved, the system automatically checks the current data to determine if any alerts need to be triggered.
If the configured conditions are met, the alert will appear on the dashboard according to the defined parameters.

Monitoring Your Power Platform Resources with Alerts

Step 8: Receive Email Notifications (If Configured)

If the email notification option was selected while setting up the alert rule, an alert message will be sent to the administrator’s registered email address whenever the rule is triggered.
This ensures that admins are promptly informed of any performance issues or failures, even without logging into the Admin Center.

Step 9: Access the Triggered Alert in the Admin Center

Click Open Triggered Alert in the email or notification panel to navigate directly to the Power Platform Admin Center.
This link provides a quick way to review the triggered alert and perform further diagnostics.

Monitoring Your Power Platform Resources with Alerts

Step 10: Review Detailed Alert Information

Within the Admin Center, detailed information about the triggered alert can be viewed, including the specific model-driven app, alert type, timestamp, and record owner.
This detailed insight allows administrators to identify the root cause quickly and take corrective actions to restore normal performance.

Monitoring Your Power Platform Resources with Alerts

FAQs

  1. What are Power Platform Alerts?
    Power Platform Alerts are automated notifications that help administrators monitor the performance, availability, and health of Power Platform components such as Power Apps, Power Automate flows, and environments.
  2. Who can configure Alerts in Power Platform?
    Only users with Tenant Administrator or Environment Administrator roles can configure and manage alerts. The environment must also be of Managed Type to enable monitoring.
  3. Can alerts be configured for specific apps or flows?
    Yes. Administrators can choose specific model-driven apps, canvas apps, or flows to monitor by defining their metrics and conditions during alert configuration.
  4. How are alerts delivered to admins?
    Alerts can either appear in the Power Platform Admin Center or be delivered via email notifications, depending on the settings chosen during configuration.
  5. What happens when an alert is triggered?
    When a configured metric meets the specified condition such as a drop in app success rate, the alert is triggered. Admins can then view detailed information, including the impacted app, issue type, and owner, to take quick corrective actions.

Conclusion

Implementing Power Platform Alerts has transformed the way administrators manage and monitor Dynamics 365 CRM environments. What was once a reactive and manual process has evolved into a proactive, automated monitoring system that enhances visibility and control.

By receiving real-time notifications on performance issues, organizations can act swiftly to prevent disruptions, reduce downtime, and maintain user trust. Power Platform Alerts not only strengthen operational efficiency but also ensure that mission-critical applications remain reliable and responsive at all times.

The post How to Monitor Power Platform Resources Using Alerts first appeared on Microsoft Dynamics 365 CRM Tips and Tricks.

Step-by-Step Guide: Implementing the Power Pages Summary Component with Dataverse Tables

Power Pages Summary Component with Dataverse Tables

Overview

Microsoft Power Pages continues to evolve as a powerful platform for building secure, low-code, data-driven websites. One of its latest additions, the Summary Component, brings the power of AI summarization directly into portals.

The Summary Component allows developers and makers to automatically generate short, readable summaries from Dataverse data using the Power Pages Web API. This feature helps users quickly understand patterns, trends, and key details without navigating through individual records.

This blog explains the implementation of the Summary Component for the Lead table in the Power Pages portal to summarize key fields such as Full Name, Creation Date, Annual Revenue, Subject, Company Name, Email, and Telephone Number.

Business Use Case

The goal of this implementation is to provide sales managers and team members with a quick overview of lead information directly from the portal without requiring them to open each record.

Traditionally, reviewing leads involves scanning through a detailed list of entries, which can be time-consuming. The new Summary Component solves this by generating a concise, AI-based paragraph summarizing all relevant leads.

Example: Instead of reading a table with multiple columns, the component can generate a statement like:

“In the last month, five new leads were created, including John Carter from Contoso Ltd. and Priya Mehta from Bluewave Technologies, both showing strong revenue potential.”

This not only saves time but also provides instant insight into the business pipeline.

Step-by-Step Implementation of Summary Component

The following steps outline the implementation of Power Pages Design Studio:

Step 1: Open Power Pages Design Studio

Open the Power Pages Design Studio and navigate to the page where the summary needs to appear.

Step 2: Add the Summary Component

In the selected section, click + More Options → Components → Connected to data → Summary.

Power Pages Summary Component with Dataverse Tables

Step 3: Configure the Component

In the configuration panel, fill in the details as follows:

  • Title: Lead Summary Overview
  • Summarization API: leads?$select=fullname,subject,companyname,emailaddress1,telephone1,createdon,revenue
  • Additional Instructions:
    “Provide a clear and concise summary highlighting the lead’s name, company, contact details, and the purpose or topic of the lead. Identify any patterns, urgency indicators, or follow-up requirements based on the creation date.”
  • Keep Summary Expanded: Enabled (keeps the summary expanded by default when the user visits the portal)

Power Pages Summary Component with Dataverse Tables

This configuration connects the component to the Lead table via the Power Pages Web API and instructs it to summarize the specified fields.
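
Because the Summarization API field takes a standard Power Pages Web API OData query, the scope of the summary can be narrowed with query options such as $filter, $orderby, and $top. For example, a query along these lines would summarize only recent, high-value leads (the revenue threshold and record count below are illustrative):

leads?$select=fullname,companyname,revenue,createdon&$filter=revenue gt 100000&$orderby=createdon desc&$top=25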

Configuration Settings

Before the Summary Component can retrieve data, permissions and secure access must be configured through the portal.

  1. Enable Web API for the Lead Table

Go to Power Pages Management → Site Settings → + New, and add the following key-value pairs:

  • Name: Webapi/lead/enabled
    Value: true
  • Name: Webapi/lead/fields
    Value: * (to allow access to all fields) or a comma-separated list of individual fields such as fieldlogicalname1,fieldlogicalname2,…

This explicitly grants Web API access for the Lead table in the Power Pages portal.

Additionally, verify that the setting Summarization/Data/Enable is set to true.
If this setting does not exist, create a new record with that name and set its value to true.

Power Pages Summary Component with Dataverse Tables

2. Create Table Permissions

In Power Pages → Security → Table Permissions:

  • Create a new permission record with:
    • Name: All Leads or Lead Read Permission
    • Table: Lead
    • Access Type: Global access
    • Permission: Read
  • Assign this permission to the Authenticated Users web role.

Power Pages Summary Component with Dataverse Tables

Without this, data access via the Web API will fail with the error message: “Something went wrong, please try again later.”

Testing the Component

Once the configuration is complete, publish the site and test the component.

The Summary Component will automatically connect to Dataverse, retrieve lead data, and generate a short summary paragraph that dynamically updates as new records are created or modified.

Power Pages Summary Component with Dataverse Tables

The output proved that the Web API connection and summarization logic were functioning correctly. The results dynamically update as new leads are added or existing records change in Dataverse.

Styling the Summary Component

The appearance of the Summary Component can be customized to align with the Power Pages portal theme. Styles such as borders, background colors, shadows, and other visual effects can be applied to ensure seamless integration with the overall site design.

Power Pages Summary Component with Dataverse Tables

FAQs

  1. What is the Summary Component in Power Pages?
    The Summary Component is an AI-powered feature in Microsoft Power Pages that uses natural language generation to summarize data from Dataverse tables, helping users understand key insights quickly.
  2. Can I use the Summary Component for any Dataverse table?
    Yes. It can be connected to any table with Web API access enabled. Just update the summarization API query and permissions accordingly.
  3. Do I need to enable any specific settings before using the Summary Component?
    Yes. Web API access must be enabled for the target table (e.g., Lead) and ensure the Summarization/Data/Enable site setting is set to true. Also, create the appropriate Table Permissions for the portal users.
  4. Does the Summary Component automatically refresh when data changes in Dataverse?
    Yes. Once configured, the summary updates dynamically whenever the underlying Dataverse records are modified or new data is added.
  5. Can I style or customize the Summary Component UI?
    Absolutely. The component’s appearance can be adjusted using custom CSS to align it with their Power Pages theme for a consistent visual experience.

Conclusion

The Summary Component in Power Pages is a game-changer for presenting Dataverse data in a meaningful, AI-driven format. By implementing it for the Lead table, sales teams gain quick, automated insights, which saves time, improves decision-making, and enhances the user experience.

With minimal configuration (enabling the Web API, creating table permissions, and defining a summarization query), the component delivers a seamless experience that transforms raw data into concise insights.

 

The post Step-by-Step Guide: Implementing the Power Pages Summary Component with Dataverse Tables first appeared on Microsoft Dynamics 365 CRM Tips and Tricks.
