Power Up - Upskill Yourself...


How to restrict Unwanted Power Automate flow execution

Power Automate

In Microsoft Dataverse, Power Automate flows are commonly used to execute business logic when records are created, updated, or deleted. They work well for most user-driven and real-time business operations.

However, in certain scenarios, such as integrations, background jobs, bulk data operations, or system maintenance tasks, running these flows is not always required and can negatively impact performance or cause unintended automation triggers.

To address this, Microsoft provides a way to bypass Power Automate flow execution when performing operations through the Dataverse SDK. This allows developers to update or delete records without triggering associated flows, giving greater control over when automation should or should not run.

In this blog, we’ll explore when and why bypassing Power Automate flows makes sense, how it works at a technical level, and what to keep in mind before using it in production environments.

Why Bypass Power Automate Flows?

Bypassing flows is useful when the operation is system-driven and the flow logic is not needed.

Some common reasons include:

  • Avoiding unnecessary flow execution during background operations
  • Improving performance during bulk updates or migrations
  • Preventing flows from triggering repeatedly or causing loops
  • Keeping business automation separate from technical or maintenance logic

This approach ensures that Power Automate flows run only when they genuinely add business value, rather than during behind-the-scenes system updates.

Steps to Perform

For demonstration purposes, the logic is implemented using a desktop application. The objective is to clearly compare a standard update operation with one that bypasses Power Automate flow execution.

In both scenarios:

  • The same account record is updated
  • The only difference is whether the bypass flag is applied during the update request

Update Event Without Bypass

In this scenario, the record is updated using the standard SDK request without any bypass flag.

/// <summary>
/// Updates the Account record using the standard SDK Update call.
/// </summary>
/// <param name="service">An authenticated CrmServiceClient instance</param>
/// <param name="accountName">New name for the account</param>
/// <param name="accountId">Id of the account to update</param>
private static void UpdateRecord(CrmServiceClient service, string accountName, Guid accountId)
{
    try
    {
        // Step 1: Build the entity reference for the record to update
        Entity ent = new Entity("account", accountId);
        ent["name"] = accountName;

        // Step 2: Update
        service.Update(ent);
        Console.WriteLine($"Record updated successfully. {DateTime.Now}");
    }
    catch (Exception ex)
    {
        SampleHelpers.HandleException(ex);
    }
}

Restrict Unwanted Power Automate flow execution

Restrict Unwanted Power Automate flow execution

Observed behavior:

  • The update operation succeeds
  • The associated Power Automate flow is triggered
  • Any downstream logic defined in the flow executes as expected (for example, SharePoint operations, notifications, or validations)

This is the default and expected behavior when performing update operations through the SDK.

Update Event with Power Automate Flow Bypass

In this scenario, the same update operation is executed, but the request includes the bypass flag to skip Power Automate flow execution.

/// <summary>
/// Updates the Account record while bypassing Power Automate flow execution.
/// </summary>
/// <param name="service">An authenticated CrmServiceClient instance</param>
/// <param name="accountName">New name for the account</param>
/// <param name="accountId">Id of the account to update</param>
private static void UpdateRecordWithBypass(CrmServiceClient service, string accountName, Guid accountId)
{
    try
    {
        // Step 1: Build the entity reference for the record to update
        Entity ent = new Entity("account", accountId);
        ent["name"] = accountName;

        // Step 2: Create the update request
        var updateRequest = new UpdateRequest
        {
            Target = ent
        };

        // Step 3: Bypass Power Automate flows
        updateRequest.Parameters.Add("SuppressCallbackRegistrationExpanderJob", true);

        // Step 4: Execute
        service.Execute(updateRequest);
        Console.WriteLine($"Record updated with bypass successfully. {DateTime.Now}");
    }
    catch (Exception ex)
    {
        SampleHelpers.HandleException(ex);
    }
}

Restrict Unwanted Power Automate flow execution

Restrict Unwanted Power Automate flow execution

Observed behavior:

  • The record is updated successfully.
  • Power Automate flow does not execute.
  • No SharePoint or automation logic tied to the flow is triggered.

This allows the system to perform controlled updates without affecting existing automation logic.
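For scenarios like bulk updates or data migrations, the same optional parameter can be applied to every request in an ExecuteMultipleRequest batch. The sketch below is illustrative rather than production-ready: the attribute value and the accountIdsToUpdate collection are placeholders, and an authenticated CrmServiceClient named service is assumed.

```csharp
// Sketch: bulk-update accounts without triggering Power Automate flows.
var batch = new ExecuteMultipleRequest
{
    Settings = new ExecuteMultipleSettings
    {
        ContinueOnError = true,   // keep processing if one update fails
        ReturnResponses = false   // responses not needed for fire-and-forget updates
    },
    Requests = new OrganizationRequestCollection()
};

foreach (Guid accountId in accountIdsToUpdate) // placeholder collection of record ids
{
    var entity = new Entity("account", accountId);
    entity["name"] = "Updated by migration job"; // placeholder value

    var update = new UpdateRequest { Target = entity };

    // Skip Power Automate flows for this request only
    update.Parameters.Add("SuppressCallbackRegistrationExpanderJob", true);

    batch.Requests.Add(update);
}

service.Execute(batch);
```

Keep in mind that a single ExecuteMultiple batch is limited to 1,000 requests, so larger migrations need to be processed in chunks.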

Conclusion

Bypassing Power Automate flows in Microsoft Dataverse is a powerful capability designed for advanced scenarios, such as:

  • System integrations
  • Maintenance or cleanup jobs
  • Bulk updates and data migrations

When used appropriately, it helps improve performance, avoid unnecessary automation, and maintain clean separation between business logic and technical processes.

However, this feature should be applied carefully and intentionally. Overusing it can lead to missed automation or inconsistent system behavior. When used in the right context, it results in cleaner implementations and more predictable outcomes.

The post How to restrict Unwanted Power Automate flow execution first appeared on Microsoft Dynamics 365 CRM Tips and Tricks.


How the Sales Qualification Agent Transforms Lead Management in Dynamics 365

In the rapidly evolving landscape of sales automation, staying ahead means leveraging the latest AI capabilities. The Sales Qualification Agent is a significant advancement in Microsoft Dynamics 365 Sales, designed to revolutionize lead qualification processes.

This advanced AI agent functions as an autonomous member of the sales team. It is capable of qualifying leads by directly engaging with potential customers, gathering critical information, and seamlessly handing off qualified leads to human sellers when specific criteria are met.

This guide provides a comprehensive technical walk-through for configuring and deploying the Sales Qualification Agent, covering capabilities from basic research to autonomous customer engagement.

Prerequisites and Infrastructure Setup

Before configuring the agent, the underlying environment must be prepared to support secure data access and automation.

App Registration and Application User Configuration

To enable the agent to interact securely with Dataverse, an application must be registered in Microsoft Entra ID (formerly Azure Active Directory), and a corresponding application user must be created in Dynamics 365.

Please refer to the official Microsoft documentation for these standard configuration steps.

Ensure the Application User is assigned the necessary security roles before proceeding.

Configuring the Sales Qualification Agent

Access the Sales Hub application and navigate to the Copilot for Sales or Sales IQ settings area (location may vary based on version).

Step 1: Enable the Sales Qualification Feature

Locate and enable the “Sales qualification” preview feature to activate the agent’s capabilities.

Sales Qualification Agent

Step 2: Configure Automation Levels

The agent’s level of autonomy can be configured to match organizational requirements and infrastructure readiness.

Sales Qualification Agent

Two distinct operating modes are available:

  1. Research-only (Demonstrated Mode):
  • Functionality: The agent analyzes leads and generates insights, drafting suggested emails for sellers to review and send manually.
  • Prerequisites: Standard App Registration and Application User.
  • Use Case: Ideal for organizations evaluating AI capabilities prior to implementing fully autonomous external communication.
  2. Research and engage:
  • Functionality: The agent operates autonomously, performing lead research and sending emails directly to engage with customers without human intervention.

Additional Technical Requirements:

  • Shared Mailbox: A shared mailbox must be provisioned in the Microsoft 365 Admin Center to serve as the agent’s sending address.
  • Server-Side Synchronization: The shared mailbox must be enabled for Server-Side Synchronization within Dynamics 365 to facilitate automated email transmission.

Use Case: High-volume sales environments requiring immediate, automated lead qualification and follow-up.

Quick Note: This guide utilizes the Research-only mode to demonstrate core analytical capabilities without requiring immediate Exchange infrastructure modifications.

Step 3: Configure Agent Profile

Define the agent’s persona, including its display name and tone of voice. This configuration ensures the agent’s communication aligns with corporate branding standards.

Sales Qualification Agent

Step 4: Add Knowledge Sources    

To function effectively, the agent requires context about the organization. Input comprehensive company details to ensure accurate representation during interactions.

Sales Qualification Agent

Step 5: Define Products and Value Propositions

The agent must be trained on the product portfolio. By accurately defining products and their specific value propositions, the agent is equipped to address customer inquiries and articulate benefits effectively.

Sales Qualification Agent

Configuring Agent Logic: Criteria and Routing

This section covers the configuration of the agent’s decision-making logic, defining target audiences and handoff triggers.

Selection Criteria

Establish precise criteria to define which leads the agent should process. Filters can be applied based on attributes such as lead source, industry, or geography to ensure resource optimization.

Sales Qualification Agent

Handoff Criteria

Define the logic for determining when a lead is considered “qualified.” Copilot can assist in generating these natural language rules. A typical rule might be: “Hand off when the customer expresses clear purchase intent or requests a quotation.”

Sales Qualification Agent

Assignment Rules

Configure routing rules to determine lead ownership post-qualification. These rules ensure that qualified leads are programmatically assigned to the appropriate seller or sales team.

Sales Qualification Agent

Reviewing Performance and Handoff

Upon completion of the setup, the agent requires a brief initialization period to begin processing valid leads.

Monitoring Performance and Check Result

The Agent Insights dashboard provides real-time metrics on agent performance. Key analytics include the volume of processed leads, qualification rates, and conversation efficiency, allowing for data-driven optimization of the agent’s configuration.

Sales Qualification Agent

To track the effectiveness of the qualification process, the “Leads handed over by AI Agent” view offers a consolidated list of leads successfully transferred to human sellers. This view enables supervisors to monitor lead volume per seller and ensure smooth transitions.

Sales Qualification Agent

Conclusion

The Sales Qualification Agent represents a paradigm shift in automated lead management. By delegating initial research and qualification tasks to AI, organizations can enable their sales professionals to focus exclusively on high-value activities, such as closing deals and nurturing qualified relationships.

FAQs: Sales Qualification Agent in Dynamics 365

1. What is the Sales Qualification Agent in Dynamics 365?

The Sales Qualification Agent is an AI-powered feature in Microsoft Dynamics 365 Sales that autonomously qualifies leads, engages with potential customers, and hands off qualified leads to human sellers, streamlining the sales process and saving time for sales teams.

2. How does the Sales Qualification Agent improve lead management?

By automating lead research, initial outreach, and qualification, the agent ensures faster lead response times, reduces manual effort, and helps sales professionals focus on high-value activities like closing deals and nurturing relationships.

3. Can the Sales Qualification Agent send emails directly to leads?

Yes. In the Research and Engage mode, the agent can send emails autonomously using a shared mailbox configured in Dynamics 365, while in Research-only mode, it drafts suggested emails for sellers to review and send manually.

4. What are the prerequisites for configuring the Sales Qualification Agent?

Before setup, you need:

An application registered in Microsoft Entra ID (formerly Azure AD)

A corresponding application user in Dynamics 365 Sales

Proper security roles assigned to the application user

Optional: A shared mailbox for automated email sending

5. How do I define which leads the agent should qualify?

The agent uses selection criteria based on attributes like lead source, industry, geography, or custom rules you configure. You can also set handoff criteria to determine when a lead is considered qualified.

The post How the Sales Qualification Agent Transforms Lead Management in Dynamics 365 first appeared on Microsoft Dynamics 365 CRM Tips and Tricks.


Generating and Sharing Screen in PDFs from D365 Sales Using Canvas Apps

Canvas Apps

In many Dynamics 365 Sales implementations, sales users need a simple and intuitive way to preview a quote, generate a PDF, and share it with customers. Traditionally, this requirement is handled using Word templates, which often feel rigid, require backend configuration, and do not provide a smooth preview experience for users.

Microsoft has introduced PDF generation and PDF preview capabilities in Canvas apps, making it possible to convert Canvas app screens or containers into PDF files and preview them directly within the app. These capabilities open new possibilities for creating user-friendly, preview-first document generation experiences in D365 Sales.

In this blog, we demonstrate how to build a Canvas app that allows users to view quotes, preview quote details as a PDF, and prepare the PDF for sharing, all using native Power Apps functionality.

How This Works (High-Level Overview)

This approach uses a Canvas app embedded in D365 Sales to display quote data. A specific container holding the quote layout is converted into a PDF using the PDF() function. The generated PDF is stored in a variable and passed to the PDF Viewer control, allowing users to preview the document before sharing or processing it further.

App Design Overview

To keep the user experience simple and intuitive, the app is designed with two screens.

Screen 1: Active Quotes

The first screen displays active quotes in a gallery, as shown below.

This screen acts as the entry point for the user and allows quick selection of a quote.

When a user selects a quote:

  • The selected quote is stored in a variable
  • The app navigates to the quote preview screen

Generating and Sharing Screen in PDFs

This approach keeps quote selection fast and avoids unnecessary navigation between screens.

Screen 2: Quote Details and Quote Preview

The second screen is designed to display quote details and a PDF preview side by side.

On this screen, I have used two containers:

  • One container to display the quote details
  • Another container to preview how the quote will appear in the PDF Viewer

To display the PDF in the PDF Viewer, the following approach is used:

Generating the PDF

The PDF() function is used to generate a PDF from the quote details container.
The generated PDF is stored in a variable (MyPdf).

Generating and Sharing Screen in PDFs

This ensures that the same layout used to display quote details is reused for PDF generation.

Previewing the PDF

The MyPdf variable is then passed to the PDF Viewer control, allowing users to preview exactly how the PDF will look before it is shared.

Generating and Sharing Screen in PDFs

This provides a true “what you see is what you get” experience for the user.
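The two steps above can be sketched in Power Fx. This is an illustrative snippet only: the control names (QuoteDetailsContainer, PdfViewer1, btnPreview) are placeholders, and because the PDF() function is experimental, its option names may change.

```powerfx
// OnSelect of btnPreview: render the quote-details container to a PDF
// and store the result in a variable.
Set(MyPdf, PDF(QuoteDetailsContainer, {Size: PaperSize.A4}));

// Document property of PdfViewer1: display the generated PDF.
MyPdf
```

Because the PDF Viewer's Document property simply references MyPdf, re-running the button formula refreshes the preview immediately.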

Below is how the page layout looks with the quote details on one side and the PDF preview on the other.

Generating and Sharing Screen in PDFs

 

Generating and Sharing Screen in PDFs

Important Note on Experimental Features

At the time of writing, both the PDF() function and the PDF Viewer control are marked as Experimental features in Power Apps. They must be enabled from the app's Settings before use, and their behavior may change before general availability.

Benefits of This Approach

  • Preview-first user experience
  • No dependency on Word templates
  • Flexible and easily customizable layouts
  • Consistent PDF output
  • Simple integration with Power Automate for further processing

Real-World Use Cases

This pattern can be applied across multiple D365 Sales and business scenarios, including:

  • Quote generation and sharing
  • Invoice previews
  • Order confirmations
  • Service reports
  • Custom sales documents

The same reusable layout approach ensures consistency across documents while keeping the user experience simple.

FAQs

Can Canvas apps generate PDFs in D365 Sales?
Yes. Canvas apps support the PDF() function, which allows screens or containers to be converted into PDF files that can be previewed or shared.

Do I need Word templates to generate PDFs in D365 Sales?
No. This approach removes the dependency on Word templates by generating PDFs directly from Canvas app layouts.

Can users preview PDFs before sharing them?
Yes. The PDF Viewer control allows users to preview the generated PDF inside the Canvas app before sharing.

Can this be integrated with Power Automate?
Yes. The generated PDF can be easily passed to Power Automate for emailing, storage, or further processing.

Conclusion:

By combining Canvas apps with the PDF() function and PDF Viewer control, it is now possible to create lightweight and flexible document generation experiences directly within D365 Sales.

This approach allows users to preview, generate, and share quote PDFs using a single reusable layout, improving usability and reducing dependency on backend templates.

The same pattern can be extended to other scenarios such as invoices, orders, service reports, or any use case where formatted documents are required.

The post Generating and Sharing Screen in PDFs from D365 Sales Using Canvas Apps first appeared on Microsoft Dynamics 365 CRM Tips and Tricks.


How to Customize the Task Pane in Project Operations

Earlier, the Task Details pane in Project Operations was standard and not customizable at admin level. When users clicked the info icon present next to a task, a fixed task pane from Project showed only predefined fields. Organizations could not add custom fields or modify the layout, so users often had to navigate away from the task grid to view or update additional task details.

Now, with the latest update, the task pane is customizable. Using this feature, we can customize tasks of a project within the Task Pane itself. This allows users to view and update all relevant task information directly from the task pane, improving efficiency while still keeping the option to access the original standard view if needed.

In this blog, let us explore how to customize the Task Pane in Project Operations.

  • To enable the Customize Task Pane feature in Project Operations, navigate to the Parameters entity under Settings.
  • Click on the Feature Control button on the ribbon bar, as shown below.

How to Customize the Task Pane in Project Operations

  • After enabling the Customize Task Pane feature, navigate to the Projects area and, under the Projects group, click on Projects.

Note: Choose Existing Project or create a New Project.

How to Customize the Task Pane in Project Operations

  • Let us create a project with all the required details, save it, and then navigate to the Tasks tab.

How to Customize the Task Pane in Project Operations

How to Customize the Task Pane in Project Operations

  • Add the tasks required to complete your project. Click on any task’s ⓘ icon to view its details in the new Task Side Pane.

How to Customize the Task Pane in Project Operations

How to Customize the Task Pane in Project Operations

  • Now, let us navigate to Power Apps to customize the Task Side Pane. This customization allows users to view and update task details directly from the Task Grid, eliminating the need to open separate pages or navigate away from their current screen.
  • As a result, users can manage tasks more efficiently within a single, streamlined interface.

 Steps to customize the Task Pane:

1. Navigate to Power Apps and click on Tables, as shown below.

How to Customize the Task Pane in Project Operations

2. Search for the Project Task table and click on it.

How to Customize the Task Pane in Project Operations

3. In Data experiences, select Forms and click on Task details.

How to Customize the Task Pane in Project Operations

How to Customize the Task Pane in Project Operations

How to Customize the Task Pane in Project Operations

4. Customize the Task details form according to your requirements.

Note: You can customize the Task details form whenever needed and publish the changes according to your requirements.

How to Customize the Task Pane in Project Operations

Note: A Resource field has been added to the Task details form to demonstrate the customization.

5. Click on the Save and Publish buttons to apply the changes.

How to Customize the Task Pane in Project Operations

6. Navigate to the Project, click on the Tasks tab, and click on any task’s ⓘ icon to view its details in the new Task Side Pane.

How to Customize the Task Pane in Project Operations

Note: The Resource field added to the Task details form is now visible in the Task Pane.

Conclusion:

This approach allows us to customize the Task Details form so users can view and update additional task information directly in the Task Side Pane, without navigating away from the task grid.

The post How to Customize the Task Pane in Project Operations first appeared on Microsoft Dynamics 365 CRM Tips and Tricks.


Static or Dynamic Segments? A Complete Guide for Customer Insights – Journeys

Static or Dynamic Segments A Complete Guide for Customer Insights – Journeys

Dynamics 365 Customer Insights – Journeys is one of the most preferred modules offered by Microsoft, which provides a flexible platform that enables organizations to engage their audience across every stage of the customer journey.

It empowers businesses to create personalized, timely, and meaningful interactions based on customer behavior and preferences. As we all know, understanding your customers is not a one-time activity; it is a continuous journey that evolves with every interaction. Targeting the right audience to perform any marketing activity is the key to success.

When running marketing activities in Customer Insights – Journeys, the most important question to answer is:

“Who exactly should receive this message or journey?”

This is where Segments play a key role.

What Are Segments in Customer Insights – Journeys?

Segments in Customer Insights – Journeys allow you to group Contacts or Leads based on a defined set of attributes or behaviors. These segments act as the foundation for targeting audiences in real-time journeys, emails, and other marketing engagements.

Using segments, you can:

  • Filter Contacts or Leads using specific conditions
  • Target audiences based on demographic or behavioral data
  • Ensure messages reach the most relevant audience

Segments help transform generic marketing campaigns into highly targeted and strategic journeys.

Types of Segments in Customer Insights – Journeys

There are two types of segments available in Customer Insights – Journeys:

  1. Automatic Refresh (Dynamic Segment)
  2. Static Snapshot (Static Segment)

In earlier versions of the Marketing app, users could choose the segment type directly when clicking the New Segment button. In the current Real-time Journeys experience, this behavior has changed.

Now, you must:

  1. Create the segment first
  2. Define the segment type later from the Settings panel

Where to Find Segments in Customer Insights – Journeys

To access segments:

  • Go to Customer Insights – Journeys
  • Navigate to Real-time journeys
  • Select Audience → Segments

At the segment record level, you will notice a Type field that indicates whether the segment is configured as a Static Snapshot or an Automatic Refresh.

Dynamics 365 Customer Insights Journeys Segments

How to Create a Segment in the New Experience

When you click New Segment, you are no longer prompted to choose between Static or Dynamic upfront. Instead, the segment is created first, and its behavior is defined later.


During creation, you have two options:

  1. Using Query Assist (Copilot)

Query Assist allows Copilot AI to help generate segment logic.

  • Start typing in the Query Assist box
  • Select a predefined prompt such as “Contacts who opened an email”
  • Click Create

Dynamics 365 Customer Insights Journeys Segments

You can either:

  • Click Use to apply the suggested query
  • Or click Create manually to skip AI assistance

Dynamics 365 Customer Insights Journeys Segments

Once selected, Copilot helps build the initial query structure based on the chosen prompt.

Dynamics 365 Customer Insights Journeys Segments

You can find more details on building segments with Query Assist in the official Microsoft documentation.

  2. Creating a Segment Manually

If you prefer full control:

  • Click Create manually, or
  • Leave the Query Assist box empty and click Create

This opens the Segment Builder, where you can define your logic from scratch.

Dynamics 365 Customer Insights Journeys Segments

Building Segment Logic Using the Segment Builder

Inside the Segment Builder, you can define segment criteria using:

  • Attribute-based conditions (e.g., Industry, Country, Job Title)
  • Behavioral conditions (e.g., email opens, form submissions)
  • Include or Exclude specific Leads or Contacts

You can explicitly include or exclude specific records. If a record is included or excluded manually, it will always be added to or removed from the segment member list, even if it does not meet the defined conditions.

When you open the Segment Builder, you can start creating a new group by clicking on the desired option (either Attribute or Behavioral). If required, you can Include or Exclude a particular audience as well.

Refer to the screenshots below:

Dynamics 365 Customer Insights Journeys Segments

Dynamics 365 Customer Insights Journeys Segments

Example Use Case: Targeting Manufacturing Leads from India

Let’s consider a practical example.

Use case:
Target Leads from the Manufacturing sector located in India.

Segment conditions:

  • Industry equals Manufacturing Services
  • Country/Region equals India

You can create an Attribute group and define these conditions accordingly. Once the logic is complete, save the segment.

Before activating it, you can preview the audience size.

Estimating Segment Size Before Activation

Before marking a segment as Ready to use, you can:

  • Click Estimate to preview the expected number of segment members
  • Review the estimated member count to validate your logic

This helps ensure your targeting criteria are accurate before using the segment in a journey.

Dynamics 365 Customer Insights Journeys Segments

Dynamics 365 Customer Insights Journeys Segments

The member count can be previewed from here:

Dynamics 365 Customer Insights Journeys Segments

Segment Settings: Static Snapshot vs Automatic Refresh

The Settings panel is where the segment type is defined.

By default, all newly created segments are set to Automatic Refresh.

Dynamics 365 Customer Insights Journeys Segments

Let us consider one of the Dynamic Segment graphs. As you can see below, the segment size has increased over time.

Dynamics 365 Customer Insights Journeys Segments

If your use case requires a static segment, you must explicitly select the “Static Snapshot” option, as shown below:

Dynamics 365 Customer Insights Journeys Segments

With “Static Snapshot,” the segment membership is not updated dynamically; it is captured once, for one-time use.

Let us consider one of the Static Segment graphs. As you can see below, the segment size has remained constant over the duration, since membership was captured only once.

Dynamics 365 Customer Insights Journeys Segments

Key Differences: Static Snapshot vs Automatic Refresh

Feature             | Static Snapshot    | Automatic Refresh
--------------------|--------------------|-------------------
Membership updates  | No                 | Yes
Audience type       | Fixed              | Dynamic
Best suited for     | One-time campaigns | Ongoing journeys
Data refresh        | One-time           | Continuous
Real-time targeting | Not supported      | Supported

FAQs

What Is Automatic Refresh (Dynamic Segment)?

In Automatic Refresh, the segment membership updates dynamically.

This means:

  • New Contacts or Leads that meet the criteria are automatically added
  • Records that no longer meet the criteria are removed
  • The segment size changes continuously over time

Dynamic segments are ideal for:

  • Ongoing marketing journeys
  • Real-time audience targeting
  • Long-running nurture campaigns

You can observe these changes visually through segment growth graphs, where the member count increases or decreases over time.

What Is Static Snapshot (Static Segment)?

In Static Snapshot, the segment captures audience members at a specific point in time.

This means:

  • Segment membership does not update after activation
  • The audience remains fixed
  • It is typically used for one-time activities

Static Snapshot segments are best suited for:

  • One-time email campaigns
  • Event invitations
  • Compliance or audit-based targeting

Segment graphs for Static Snapshot segments show a flat line, indicating no change in membership over time.

When Should You Use Each Segment Type?

  • Use Automatic Refresh when your audience changes frequently and journeys run continuously.
  • Use Static Snapshot when you need a fixed audience for a specific moment or campaign.

Choosing the right segment type ensures accurate targeting and optimal journey performance.

Conclusion

Segments play a critical role in successfully targeting audiences within Customer Insights – Journeys. Whether you are grouping customers based on demographic attributes or behavioral interactions, segments allow you to make your marketing more strategic and data-driven.

Automatic Refresh segments are ideal for real-time, evolving journeys, while Static Snapshot segments are best suited for one-time or fixed audience scenarios. Understanding the difference between these two options helps you design more effective journeys and deliver the right message to the right audience at the right time.

The post Static or Dynamic Segments? A Complete Guide for Customer Insights – Journeys first appeared on Microsoft Dynamics 365 CRM Tips and Tricks.


Automating Business PDFs Using Azure Document Intelligence and Power Automate

In today’s data-driven enterprises, critical business information often arrives in the form of PDFs—bank statements, invoices, policy documents, reports, and contracts. Although these files contain valuable information, turning them into structured, reusable data or finalized business documents often requires significant manual effort and is highly error-prone.
By leveraging Azure Document Intelligence (for PDF data extraction), Azure Functions (for custom business logic), and Power Automate (for workflow orchestration) together, businesses can create a seamless automation pipeline that interprets PDF content, transforms extracted information through business rules, and produces finalized documents automatically, eliminating repetitive manual work and improving overall efficiency.
In this blog, we will explore how these Azure services work together to automate document creation from business PDFs in a scalable and reliable way.

Use Case: Automatically Converting Bank Statement PDFs into CSV Files

Let’s consider a potential use case.
The finance team receives bank statements as PDF attachments in a shared mailbox on a regular basis. These statements contain transaction details in tabular format, but extracting the data manually into Excel or CSV files is time-consuming and often leads to formatting issues such as broken rows, missing dates, and incorrect debit or credit values.
The goal is to automatically process these emailed PDF bank statements as soon as they arrive, extract the transaction data accurately, and generate a clean, structured CSV file that can be directly used for reconciliation and financial reporting.
By using Power Automate to monitor incoming emails, Azure Document Intelligence to analyze the PDFs, and Azure Functions to apply custom data-cleaning logic, the entire process can be automated, eliminating manual effort and ensuring consistent, reliable output.
Let’s walk through the steps below to achieve this requirement.

Prerequisites:

Before we get started, we need to have the following things ready:
• Azure subscription.
• Access to Power Automate to create email-triggered flows.
• Visual Studio 2022

Step 1:

Navigate to the Azure portal (https://portal.azure.com), search for the Azure Document Intelligence service, and click Create to provision a new resource.

Azure Document Intelligence

Step 2:

Choose Azure subscription 1 as the subscription, create a new resource group, enter an appropriate name for the Document Intelligence instance, select the desired pricing tier, and click Review + Create to proceed.

Azure Document Intelligence

Step 3:

After reviewing the configuration, click Create and wait for the deployment to complete. Once the deployment is finished, select Go to resource.

Azure Document Intelligence

Step 4:

Navigate to the newly created Document Intelligence resource, and make a note of the endpoint and any one of the keys listed at the bottom of the page.

Azure Document Intelligence

Step 5:

Create a new Azure Function in Visual Studio 2022 using an HTTP trigger with the .NET isolated worker model, and add the following code.

[Function("PdfToCsvExtractor")]
public async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req)
{
_logger.LogInformation("Form Recognizer extraction triggered.");

// Accept either multipart/form-data (file field) OR raw application/pdf bytes.
Stream pdfStream = null;

try
{
// If content-type is multipart/form-data => read form and file
if (req.HasFormContentType)
{
var form = await req.ReadFormAsync();
var file = form.Files?.FirstOrDefault();
if (file == null || file.Length == 0)
return new BadRequestObjectResult("No file was uploaded in the multipart form-data.");

pdfStream = new MemoryStream();
await file.CopyToAsync(pdfStream);
pdfStream.Position = 0;
}
else
{
// Otherwise expect raw PDF bytes with Content-Type: application/pdf
if (!req.Body.CanRead)
return new BadRequestObjectResult("Request body empty.");

pdfStream = new MemoryStream();
await req.Body.CopyToAsync(pdfStream);
pdfStream.Position = 0;
}

string endpoint = Environment.GetEnvironmentVariable("FORM_RECOGNIZER_ENDPOINT");
string key = Environment.GetEnvironmentVariable("FORM_RECOGNIZER_KEY");
if (string.IsNullOrEmpty(endpoint) || string.IsNullOrEmpty(key))
return new BadRequestObjectResult("Missing Form Recognizer environment variables.");

var credential = new AzureKeyCredential(key);
var client = new DocumentAnalysisClient(new Uri(endpoint), credential);

var operation = await client.AnalyzeDocumentAsync(
WaitUntil.Completed,
"prebuilt-document",
pdfStream
);
var result = operation.Value;
_logger.LogInformation("PDF stream length: " + pdfStream.Length);

_logger.LogInformation("Tables found: " + result.Tables.Count);

// Collect the extracted tables
var filteredTables = result.Tables.ToList();
if (filteredTables.Count == 0)
return new BadRequestObjectResult("No transaction table found.");

string csvOutput = BuildCsvFromTables(filteredTables);

var csvBytes = Encoding.UTF8.GetBytes(csvOutput);

var emailResult = await SendEmailWithCsvAsync(
_logger,
csvBytes,
"ExtractedTable.csv");

return new OkObjectResult("Table data extracted and exported to csv file");
}
catch (Exception ex)
{
_logger.LogError(ex, ex.Message);
return new StatusCodeResult(500);
}
finally
{
pdfStream?.Dispose();
}
}

// Method to build the CSV file from the extracted tables
private string BuildCsvFromTables(IReadOnlyList<DocumentTable> tables)
{
var csvBuilder = new StringBuilder();
// Write CSV header
csvBuilder.AppendLine("Date,Transaction,Debit,Credit,Balance");
foreach (var table in tables)
{
// Group cells by row index
var rows = table.Cells
.GroupBy(c => c.RowIndex)
.OrderBy(g => g.Key);
foreach (var row in rows)
{
// Skip header row (row index 0)
if (row.Key == 0)
continue;
var rowValues = new string[5];
foreach (var cell in row)
{
if (cell.ColumnIndex < rowValues.Length)
{
// Clean commas and line breaks for CSV safety
rowValues[cell.ColumnIndex] =
cell.Content.Replace(",", " ").Replace("\n", " ").Trim();
}
}
csvBuilder.AppendLine(string.Join(",", rowValues));
}
}
return csvBuilder.ToString();
}

// Method to send the CSV file as an email attachment via Microsoft Graph
public async Task<IActionResult> SendEmailWithCsvAsync(
ILogger log,
byte[] csvBytes,
string csvFileName)
{
log.LogInformation("Inside AzureSendEmailOnSuccess");

string clientId = Environment.GetEnvironmentVariable("InogicFunctionApp_client_id");
string clientSecret = Environment.GetEnvironmentVariable("InogicFunctionApp_client_secret");
string tenantId = Environment.GetEnvironmentVariable("Tenant_ID");
string receiverEmail = Environment.GetEnvironmentVariable("ReceiverEmail");
string senderEmail = Environment.GetEnvironmentVariable("SenderEmail");

var missing = new List<string>();

if (string.IsNullOrEmpty(clientId)) missing.Add(nameof(clientId));
if (string.IsNullOrEmpty(clientSecret)) missing.Add(nameof(clientSecret));
if (string.IsNullOrEmpty(tenantId)) missing.Add(nameof(tenantId));
if (string.IsNullOrEmpty(receiverEmail)) missing.Add(nameof(receiverEmail));
if (string.IsNullOrEmpty(senderEmail)) missing.Add(nameof(senderEmail));

if (missing.Count > 0)
{
return new BadRequestObjectResult(
new { message = "Missing: " + string.Join(", ", missing) }
);
}

var app = ConfidentialClientApplicationBuilder
.Create(clientId)
.WithClientSecret(clientSecret)
.WithAuthority($"https://login.microsoftonline.com/{tenantId}")
.Build();

var result = await app.AcquireTokenForClient(
new[] { "https://graph.microsoft.com/.default" })
.ExecuteAsync();

string token = result.AccessToken;

string emailBody =
"Hello,<br/><br/>"
+ "Please find attached the extracted CSV.<br/><br/>"
+ "Regards,<br/>Inogic Developer.";

var attachment = new Dictionary<string, object>
{
{ "@odata.type", "#microsoft.graph.fileAttachment" },
{ "name", csvFileName },
{ "contentType", "text/csv" },
{ "contentBytes", Convert.ToBase64String(csvBytes) }
};

var emailPayload = new Dictionary<string, object>
{
{
"message",
new Dictionary<string, object>
{
{ "subject", "Extracted PDF Table CSV" },
{
"body",
new Dictionary<string, object>
{
{ "contentType", "HTML" },
{ "content", emailBody }
}
},
{
"toRecipients",
new[]
{
new Dictionary<string, object>
{
{
"emailAddress",
new Dictionary<string, object>
{
{ "address", receiverEmail }
}
}
}
}
},
{ "attachments", new[] { attachment } }
}
},
{ "saveToSentItems", "false" }
};

string json = JsonSerializer.Serialize(emailPayload);

using var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Authorization =
new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", token);

var httpContent = new StringContent(json, Encoding.UTF8, "application/json");

var response = await httpClient.PostAsync(
$"https://graph.microsoft.com/v1.0/users/{senderEmail}/sendMail",
httpContent
);

if (response.IsSuccessStatusCode)
return new OkObjectResult("CSV Email sent successfully.");

string errorBody = await response.Content.ReadAsStringAsync();
log.LogError($"Graph Error: {response.StatusCode} - {errorBody}");
return new StatusCodeResult(500);
}

Step 6:

Build the Azure Function project in Visual Studio and publish it to the Azure portal.

Step 7:

Open https://make.powerautomate.com and create a new cloud flow using the When a new email arrives in a shared mailbox (V2) trigger. Enter the shared mailbox email address in Original Mailbox Address, and set both Only with Attachments and Include Attachments to Yes.

Azure Document Intelligence

Step 8:

Add a Condition action to verify that the attachment type is PDF.

Azure Document Intelligence

Step 9:

If the condition is met, in the Yes branch add the Get Attachment (V2) action. Configure Message Id using the value from the trigger and Attachment Id using the value from the current loop item and the email address of the shared mailbox.

Azure Document Intelligence

Step 10:

Add a Compose action to convert the attachment content bytes to Base64 using the following expression:
base64(outputs('Get_Attachment_(V2)')?['body/contentBytes'])

Step 11:

Add another Compose action to convert the Base64 output from the previous step into a string using:
base64ToString(outputs('Compose'))
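Seen together, Steps 10 and 11 are effectively a round trip: the attachment's contentBytes value is already a Base64 string, so encoding it again and then decoding it back leaves the original Base64 content in place for the HTTP request body. A standalone TypeScript sketch of the same round trip (illustrative only; Power Automate evaluates these expressions natively):

```typescript
// contentBytes from the trigger is already Base64-encoded PDF data.
const contentBytes = Buffer.from("%PDF-1.7 sample bytes").toString("base64");

// Step 10 (Compose): base64(...) re-encodes the Base64 string itself.
const compose1 = Buffer.from(contentBytes, "utf8").toString("base64");

// Step 11 (Compose 2): base64ToString(...) decodes it back to the original string.
const compose2 = Buffer.from(compose1, "base64").toString("utf8");

console.log(compose2 === contentBytes); // true: the round trip is lossless
```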

Step 12:

Add an HTTP (Premium) action, set the method to POST, provide the URL of the published Azure Function, and configure the request body as shown below:

{
"$content-type": "application/pdf",
"$content": "@{outputs('Compose_2')}"
}

Azure Document Intelligence

To test the setup, send an email to the shared mailbox with the sample PDF attached.
Note: For demonstration purposes, a simplified one-page bank statement PDF is used. Real-world bank statements may contain multi-page tables, wrapped rows, and inconsistent layouts, which are handled through additional parsing logic.
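As a rough illustration of the extra parsing such statements can need, the standalone TypeScript sketch below merges "wrapped" rows back into their parent transaction; the function name and the [Date, Transaction, Debit, Credit, Balance] column layout are assumptions for this example, not part of the Azure Function above:

```typescript
// Illustrative helper: rows are [Date, Transaction, Debit, Credit, Balance].
// A row whose Date cell is empty is treated as a continuation of the
// previous row, and its Transaction text is appended there.
function mergeWrappedRows(rows: string[][]): string[][] {
  const merged: string[][] = [];
  for (const row of rows) {
    if (row[0].trim() === "" && merged.length > 0) {
      const prev = merged[merged.length - 1];
      prev[1] = `${prev[1]} ${row[1]}`.trim();
    } else {
      merged.push([...row]);
    }
  }
  return merged;
}
```

This mirrors how wrapped description text typically appears in statement tables, where only the transaction column carries content on continuation lines.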

Input PDF file:

Azure Document Intelligence

Output CSV file:

Azure Document Intelligence

Conclusion:

This blog demonstrated how an email-driven automation pipeline can simplify the processing of business PDFs by converting them into structured, usable data.
By combining Power Automate for orchestration, Azure Functions for custom processing, and Azure Document Intelligence for AI-based document analysis, organizations can build scalable, reliable, and low-maintenance document automation solutions that eliminate manual effort and reduce errors.

Frequently Asked Questions:

1. What is Azure Document Intelligence used for?
Azure Document Intelligence is used to extract structured data from unstructured documents such as PDFs, images, invoices, receipts, contracts, and bank statements using AI models.

2. How does Azure Document Intelligence extract data from PDF files?
It analyzes PDF content using prebuilt or custom AI models to identify text, tables, key-value pairs, and document structure, and returns the extracted data in a structured JSON format.
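To make that concrete, the returned JSON represents each table with row and column counts plus a flat list of cells; grouping cells by their row index reconstructs the grid. A simplified TypeScript sketch (only a subset of the fields the service returns, matching the rowIndex/columnIndex/content fields the C# code above relies on):

```typescript
// Simplified, illustrative shape of an extracted table.
interface TableCell { rowIndex: number; columnIndex: number; content: string; }
interface ExtractedTable { rowCount: number; columnCount: number; cells: TableCell[]; }

// Rebuild a 2D grid from the flat cell list.
function toRows(table: ExtractedTable): string[][] {
  const rows: string[][] = Array.from({ length: table.rowCount }, () =>
    new Array<string>(table.columnCount).fill("")
  );
  for (const cell of table.cells) {
    rows[cell.rowIndex][cell.columnIndex] = cell.content;
  }
  return rows;
}
```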

3. Can Power Automate process PDF attachments automatically?
Yes. Power Automate can automatically detect incoming PDF attachments from email, SharePoint, or OneDrive and trigger workflows to process them using Azure services.

4. How do Azure Functions integrate with Power Automate?
Power Automate can call Azure Functions via HTTP actions, allowing custom business logic, data transformation, and validation to run as part of an automated workflow.

5. Is Azure Document Intelligence suitable for bank statements and invoices?
Yes. Azure Document Intelligence can accurately extract tables, transaction data, and key fields from bank statements, invoices, and other financial documents.

The post Automating Business PDFs Using Azure Document Intelligence and Power Automate first appeared on Microsoft Dynamics 365 CRM Tips and Tricks.

Building Standalone Apps with Power Apps Code Apps: Using Dataverse and Office 365 Users Connectors (Part 1)

Power Apps

In the Dynamics 365 and Power Apps ecosystem, we have several options for building applications, each suited to a specific type of requirement. Model-driven apps work well when we need a structured UI with standard components, while Canvas apps let us create custom, mobile-friendly interfaces with a low-code approach. Recently, Microsoft introduced another application type called Code Apps, which offers a completely different way to build applications using a pro-code approach.

With the introduction of  Power Apps Code Apps, things have changed. Code Apps let us build standalone single page applications using modern web frameworks. These are independent applications that cannot be integrated with Canvas Apps or Model-driven Apps.

The best part is that we get direct access to more than 1,500 standard and premium connectors through the Power Apps SDK. We do not have to write any authentication code, no OAuth flows, no custom APIs, no middleware. We just have to connect and use.

In this article, we’ll walk you through creating a Code App from scratch. We’ll build Personal Dashboard, a simple application that pulls assigned cases and leads from Dataverse and shows current logged in user details using the Office 365 Users and Dataverse connectors.

What Makes Code Apps Different?

We can build a UI of our own choice and connect to a wide range of data sources using more than 1,500 standard and premium connectors provided by the Power Platform. All connections are secure because the Power Apps SDK handles authentication, and each connector enforces user-level permissions. This means the app can only access data that the signed-in user is allowed to see, so there’s no need to write custom authentication code.

Code Apps provide a balanced approach with several key advantages:

  • A standalone application that runs directly within Power Platform
  • Full development with modern web frameworks such as React, Vue, or Angular, with support for  your preferred libraries
  • Direct access to connectors through the Power Apps SDK without custom authentication code
  • Streamlined deployment through a single command to your environment

The connector integration is particularly valuable. Whether the need is to query Dataverse, access current user profile details, or use other services, the connector can be called directly. There’s no need to configure service principals, manage app registrations, or implement token management. The integration works seamlessly within the platform.

Prerequisites

Before getting started, we have to make sure the following prerequisites are in place:

  • Power Apps Premium license with Code Apps enabled environment
  • Visual Studio Code installed
  • Node.js LTS version
  • Power Platform Tools for VS Code extension

Step 1: Setting Up the Code App

Let’s create the app. Open VS Code, launch a PowerShell terminal, and run the following command:

npm create vite@latest PersonalDashboard -- --template react-ts

For this application, we are using React as the framework and TypeScript as the variant. After that, navigate to the project folder and install the dependencies:

npm install

Install the node type definitions:

npm i --save-dev @types/node

After executing these commands, the project structure will appear as shown in the image below.

PowerAppsCode

According to the official Microsoft documentation, the Power Apps SDK currently requires the port to be 3000 in the default configuration. To configure this, open vite.config.ts and replace the content with the following code:

import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
import * as path from 'path'

// https://vite.dev/config/
export default defineConfig({
  base: "./",
  server: {
    host: "::",
    port: 3000,
  },
  plugins: [react()],
  resolve: {
    alias: {
      "@": path.resolve(__dirname, "./src"),
    },
  },
});

Note for Mac users: It may be necessary to modify the package.json scripts section.

Change from:

"scripts": {
"dev": "start vite && start pac code run",
"build": "tsc -b && vite build",
"lint": "eslint .",
"preview": "vite preview"
}

to this

"scripts": {
"dev": "vite && pac code run",
"build": "tsc -b && vite build",
"lint": "eslint .",
"preview": "vite preview"
}

Save the file and run the Code App locally by executing:

npm run dev

Browse to http://localhost:3000. If the application loads successfully, press Ctrl+C to stop the server.

Step 2: Initialize the Code App

First authenticate to Power Platform:

pac auth create

After that, sign in with the credentials and select the environment:

pac env select --environment <environment-url>

Initialize the Code App:

pac code init --displayName "Personal Dashboard"

This will create a power.config.json file in the project as shown in the image below.

PowerAppsCode

Now install the Power Apps SDK. This package provides APIs that allow the application to interact directly with Power Platform services and includes built-in logic to manage connections automatically as they are added or removed.

npm install --save-dev "@microsoft/power-apps"

Update package.json to run both Vite and the Power Apps SDK server:

"scripts": {
"dev": "start pac code run && vite",
"build": "tsc -b && vite build",
"lint": "eslint .",
"preview": "vite preview"
}

Step 3: Configure Power Provider

 

Create PowerProvider.tsx under src and add the Power SDK context provider code given below.

 

import { initialize } from "@microsoft/power-apps/app";
import { useEffect, type ReactNode } from "react";

interface PowerProviderProps {
  children: ReactNode;
}

export default function PowerProvider({ children }: PowerProviderProps) {
  useEffect(() => {
    const initApp = async () => {
      try {
        await initialize();
        console.log('Power Platform SDK initialized successfully');
      } catch (error) {
        console.error('Failed to initialize Power Platform SDK:', error);
      }
    };
    initApp();
  }, []);

  return <>{children}</>;
}

Update the main.tsx and add this line in the imports section:

import PowerProvider from './PowerProvider.tsx'

and change this code snippet

<StrictMode>
<App />
</StrictMode>,

to this

<StrictMode>
  <PowerProvider>
    <App />
  </PowerProvider>
</StrictMode>,

Run the app by executing :

npm run dev

Open the URL provided by the Power Apps SDK server in the same browser profile that is signed in to your Power Platform tenant.

Step 4: Adding Dataverse Connector

Now comes the part where we will add the data source to our application. In this step, we’ll use the Dataverse connector to fetch assigned cases and leads for the logged-in user.

To do that, we first need to create a connection:

1. Go to Power Apps and open Connections.

2. Click New Connection and select Dataverse.

Follow the instructions to create the connection, as shown in the image below.

PowerAppsCode

Once the connection is ready, we have to open the terminal. For Dataverse, we have to add the tables required for the application. For this example, we’ll add the Leads and Incident (Cases) tables using the following commands:

pac code add-data-source -a dataverse -t lead

pac code add-data-source -a dataverse -t incident

PowerAppsCode

After running these commands, we can see that some files and folders are added to the project. Inside the generated folder, there are services and models folders. These contain the files for Leads, Incidents, and other tables, which can be used in the code. For example:

import { AccountsService } from './generated/services/AccountsService';

import type { Accounts } from './generated/models/AccountsModel';

CRUD operations can be performed on Dataverse using the app. Before accessing any data, we have to initialize the Power Apps SDK to avoid errors. An async function or state check can ensure the SDK is ready before making API calls. For example:

useEffect(() => {
  // Define an async function to initialize the Power Apps SDK properly
  // and avoid runtime errors before it is ready
  const init = async () => {
    try {
      await initialize(); // Wait for SDK initialization
      setIsInitialized(true); // Mark the app as ready for data operations
    } catch (err) {
      setError('Failed to initialize Power Apps SDK'); // Handle initialization errors
      setLoading(false); // Stop any loading indicators
    }
  };

  init(); // Call the initialization function when the component mounts
}, []);

useEffect(() => {
  if (!isInitialized) return;

  // Place your data reading logic here
}, [isInitialized]);


 

Step 5: Adding Office 365 Users Connector

Similar to Dataverse, we need to create a connection for Office 365 Users by following the same steps. Once the connection is ready, we need to add it to the application. First, list all available connections to get the connection ID using command:

pac connection list

It will list all the connections available in the selected environment. Copy the connection ID for Office 365 Users from the list, then add it to the project using:

pac code add-data-source -a "shared_office365users" -c "<connection-id>"

After running this command, the Office 365 Users connector will be available to use in the application, allowing access to user profiles, and other Office 365 user data.


Step 6: Building the UI

There are two ways to build a good UI. The first is the traditional coding approach where we write the complete code manually. The second is by using GitHub Copilot integrated in VS Code with the help of prompts.

Using GitHub Copilot:

We can generate the UI by writing a detailed prompt in GitHub Copilot. Here’s an example prompt:

Create a Personal Dashboard UI component in React with TypeScript that displays:

  1. A header section showing the current logged-in user’s profile information (name, email, job title, and profile photo) fetched from Office 365 Users connector
  2. Two main sections side by side:

– Left section: Display a list of assigned Cases (Incidents) from Dataverse

* Show case title, case number, priority, status, and created date

* Use card layout for each case

* Add loading state and error handling

– Right section: Display a list of assigned Leads from Dataverse

* Show lead name, company, topic, status, and created date

* Use card layout for each lead

* Add loading state and error handling

  3. Use modern, clean UI design with:

– Responsive layout (works on desktop and mobile)

– Tailwind CSS for styling

– Professional color scheme (blues and grays)

– Proper spacing and typography

– Loading spinners while data is fetching

– Error messages if data fails to load

After providing this prompt to GitHub Copilot, it will generate the complete component code. We can then review the generated code, make any necessary adjustments, and integrate it into our application.

Step 7: Deploy Your Code App

Once the code is complete and the app is running locally, the next step is to deploy the application. For Code Apps, deployment is straightforward. First, build the application by running:

npm run build

After a successful build, execute the following command to push the application to Power Apps:

pac code push

This command will deploy the application to Power Apps. To verify the deployment, go to the Power Apps portal and open the Apps section. The newly deployed Code App will be visible in the list as shown in the image below.

PowerAppsCode

To run the app, click the play button. On the first launch, the application will prompt for permission to access the connected data sources. After allowing the permissions, the application will use those connection references for all subsequent operations.

PowerAppsCode

 

PowerAppsCode

Conclusion

With Power Apps Code Apps, we can now build standalone applications. The real advantage here is the direct access to over 1,500 connectors through the Power Apps SDK. We can connect to Dataverse, Office 365 Users, and other services without writing any authentication code. The Power Apps SDK handles all the security, and each connector respects user level permissions automatically.

We also get complete freedom to design our own UI using any libraries we prefer. The deployment process is simple. Just run the build command and push it to Power Platform with a single command.

In this article, we built a Personal Dashboard that pulls data from Dataverse and Office 365 Users. The same approach works for any application that needs to connect with Power Platform services. The setup is straightforward, and once the project is initialized, adding new data sources is just a matter of running a few commands.

Code Apps provide a practical way to build custom applications within the Power Platform ecosystem while maintaining secure connections and proper access control.

Frequently Asked Questions (FAQs)

What are Power Apps Code Apps?

Power Apps Code Apps are a new application type in Microsoft Power Platform that allow developers to build standalone single-page applications using modern web frameworks such as React, Angular, or Vue. They provide direct access to Power Platform connectors through the Power Apps SDK without requiring custom authentication code.

How are Code Apps different from Canvas Apps and Model-Driven Apps?

Unlike Canvas Apps and Model-Driven Apps, Code Apps:

  • Are fully standalone applications
  • Use a pro-code development approach
  • Allow complete control over UI and application architecture
  • Cannot be embedded into Canvas or Model-Driven Apps
  • Use modern frontend frameworks instead of low-code designers

Do Power Apps Code Apps require authentication setup?

No. Authentication is handled automatically by the Power Apps SDK. Developers do not need to implement OAuth flows, manage tokens, or configure app registrations. All connectors enforce user-level permissions by default.

Can Power Apps Code Apps connect to Dataverse?

Yes. Power Apps Code Apps can connect directly to Dataverse using the Dataverse connector. Developers can perform CRUD operations on Dataverse tables, such as Leads and Incidents once the SDK is initialized.

How do Code Apps access Office 365 user information?

Code Apps use the Office 365 Users connector to retrieve profile details such as name, email, job title, and profile photo. The connector respects the signed-in user’s permissions automatically.

The post Building Standalone Apps with Power Apps Code Apps: Using Dataverse and Office 365 Users Connectors (Part 1) first appeared on Microsoft Dynamics 365 CRM Tips and Tricks.


Build AI-Powered Apps in Minutes with Power Apps Vibe: A Complete Guide (Preview)

Power Apps Vibe

If you’ve ever tried building apps with Microsoft Power Apps, you know the process: creating tables, designing screens, adding controls, connecting data, and writing formulas. While the traditional app-building process is effective, it can also be time-consuming and complex.

But now, imagine this:

You simply describe the app you need, and within minutes, Power Apps Vibe takes over:

  • A complete data model is generated.
  • UI screens are automatically designed.
  • Built-in logic is incorporated.
  • A functional prototype is ready to go.

All this, without having to drag a single control or write a line of code.

Welcome to Power Apps Vibe—a revolutionary AI-powered app development platform. Unlike traditional app design methods, Power Apps Vibe makes building apps simpler, faster, and more intuitive than ever before.

Instead of spending hours designing screens and wiring logic, Vibe transforms app development into a simple, conversational experience. You describe what you need, and it creates the foundation for your app—data model, UI, navigation, and logic—automatically.

Power Apps Vibe

In this blog, I’ll break down what Vibe is, why Microsoft created it, and how you can start building full-stack apps with nothing more than a sentence.

What is Power Apps Vibe?

Power Apps Vibe is Microsoft’s AI-driven app-building experience, designed to simplify app development. Available now in preview, this feature combines the best aspects of low-code and AI-powered development into a single, seamless interface.

Unlike traditional app-building tools such as Canvas or Model-Driven apps, Vibe functions like a creative partner, helping you bring your app ideas to life faster. Here’s how it works:

  • You describe your app’s requirements in simple language.
  • Power Apps Vibe automatically creates:
    • The data model behind your app.
    • The UI screens you need.
    • Navigation and action flows.
    • The core logic for functionality.

You still have full control to modify or refine any aspect of the app. Think of Power Apps Vibe as a combination of Power Apps, an AI architect, a UI designer, and a developer, all within a single interface.

Why Did Microsoft Introduce Power Apps Vibe?

The goal behind Power Apps Vibe is simple: to make app development faster, smarter, and more accessible for everyone, from business users to developers.

Organizations often face challenges such as:

  • Long development cycles
  • Lack of skilled developers
  • Difficulty translating business ideas into working apps
  • Fragmented requirements
  • Slow prototype development

Power Apps Vibe addresses these issues by enabling anyone, whether a business user, analyst, or developer, to rapidly create a solid app foundation. With Vibe, you can skip the time-consuming setup and dive straight into customizing the app for your specific needs.

Where Do You Access Power Apps Vibe?

Currently, Power Apps Vibe is available in preview and is not yet part of the standard Power Apps studio. To get started, head over to the preview portal:

🔗 https://vibe.preview.powerapps.com

Sign in with your Microsoft account, and you’ll be greeted with a clean, intuitive workspace featuring a large prompt area, ready for your ideas.

Power Apps Vibe

How to Build an App Using Vibe: A Step-by-Step Guide

Here’s what surprises most people: using Power Apps Vibe feels less like coding and more like having a conversation with a colleague. You describe what you need, and Vibe does the heavy lifting.

Let’s walk through the complete process step by step.

Step 1: Describe the App You Want

In the prompt box, simply describe your app in plain language. You don’t need to worry about technical jargon or formatting. For example:

“I want to build a Time Entry Admin app. Admins should be able to update the Base Pay Code, view a list of time entries, and edit the Base Pay Code only on this screen.”

Power Apps Vibe


Step 2: Vibe Generates Your App Plan

Once you submit your prompt, Vibe analyzes your requirements and generates a detailed plan. This blueprint typically includes:

  • The tables it will create
  • The fields within those tables
  • The screens your app will have
  • Actions and commands for functionality
  • Navigation flow between screens

Test Prompt:

“Create an app for managing Time Entries. The main screen should list all time entries. When I click a row, take me to a detail screen. Admins should be able to update the Base Pay Code on this screen. Non-admin users should not be able to edit this field.”

Power Apps Vibe

It’s essentially the blueprint of your app. If something doesn’t look right, you don’t need to start over – just refine your prompt. For example:

  • Add an audit field
  • Change the name of this table
  • Make Base Pay Code read-only for non-admins

Vibe instantly updates the plan based on your instructions, making the process feel conversational and effortless.

Step 3: Create the App

Once your plan looks good, simply click Create App.

Vibe now builds:

  • The user interface (UI)
  • Interactive forms
  • The underlying data model
  • Core logic for functionality

This process yields a functional web application that is available for immediate preview.

Power Apps Vibe

Vibe handles all the heavy lifting so you can focus on refining ideas instead of wrestling with syntax.

Step 4: Refine the App Through Natural Language

This is where Vibe feels different from anything we’ve seen before.

You can simply chat with it:

  • “Make the Base Pay Code field bigger.”
  • “Add a dashboard screen with totals.”
  • “Add a search bar at the top.”
  • “Show only records assigned to the logged-in user.”

And Vibe will update the app instantly.

It’s the first time Power Apps feels like a conversation instead of a tool.

Step 5: Save Your App

When you save the app for the first time, Power Apps stores:

  • the app
  • the plan
  • the screens
  • and the data model

All inside a single solution.

It becomes part of your Power Apps environment, just like any other app.

Step 6: Connect to Real Data (Optional)

When you first build the app, it uses “draft data” – temporary tables that exist only for prototyping.

Once your app is ready for real use:

  1. Go to Data
  2. Connect to Dataverse, SQL, SharePoint, or any supported source
  3. Map the fields
  4. Publish the app again

This step turns your prototype into a production-ready application.

Step 7: Publish and Share

Once everything looks right, click Publish.

Your app becomes live, and you can share it with your team exactly like any other Power App.

Where Power Apps Vibe Really Shines

After playing with it, I realized Vibe is perfect for:

  • Rapid prototyping
  • Converting ideas into real apps within minutes
  • Building admin tools
  • Internal dashboards
  • Small line-of-business apps
  • Automating manual processes
  • Mockups for client demos
  • Reducing the back-and-forth between business teams and developers

It reduces friction. It reduces waiting. It reduces technical complexity.

You still get full control — formulas, data, actions, security, connectors — everything you normally have in Power Apps remains available.

But the start is dramatically faster.

Limitations to Keep in Mind for Power Apps Vibe

Since Vibe is still a preview feature, a few things have limitations:

  • You cannot edit Vibe apps in the classic Canvas app studio.
  • If you export/import the solution, it may break the link with the AI “plan.”
  • It currently supports creating only one app per plan.
  • Existing Dataverse tables aren’t automatically suggested during generation.
  • Some refinements still need to be done manually.

But even with these limitations, Vibe is powerful enough to start real-world projects and prototypes.

Final Thoughts

Power Apps Vibe is one of the biggest updates to the Power Platform in years.
It brings a fresh, modern, conversational style of development that feels more natural and less stressful.

Instead of spending hours designing screens and wiring logic, you can now focus on:

  • Refining ideas,
  • Improving workflows,
  • And delivering value faster.

If you haven’t tried it yet, open the preview today and type the first idea that comes to mind.
You’ll be surprised how quickly it becomes a working app.

Frequently Asked Questions: Power Apps Vibe

1. What is Power Apps Vibe and how is it different from traditional Power Apps development?

Power Apps Vibe is an AI-powered app-building tool that allows you to create full-stack apps simply by describing your requirements in natural language. Unlike traditional Power Apps, which involve manually designing screens and writing formulas, Vibe automatically generates the data model, UI, navigation, and logic. It simplifies app development by transforming it into a conversational, automated process.

2. Can I use Power Apps Vibe without any coding knowledge?

Yes, Power Apps Vibe is designed for users with little or no coding experience. It allows you to create apps by simply describing what you want in plain language. The AI handles the complex aspects of app development, such as data modeling, UI design, and logic, so you can focus on refining your ideas rather than writing code.

3. Is Power Apps Vibe available for all users or only those in certain regions?

Currently, Power Apps Vibe is in preview and can be accessed by users who sign in through the dedicated portal at https://vibe.preview.powerapps.com. While the feature is available globally, its availability might vary based on regional preview settings and Microsoft’s rollout timeline. Keep an eye on updates for broader access.

4. What are some limitations of Power Apps Vibe?

While Power Apps Vibe is a powerful tool, it does have some limitations:

  • You cannot edit Vibe-generated apps in the classic Canvas App Studio.
  • The feature currently supports only one app per plan.
  • Existing Dataverse tables aren’t automatically suggested during the app creation process.
  • Some refinements still require manual adjustments after the initial app is generated.

5. How can I connect my Power Apps Vibe app to real data?

Once your prototype is ready, you can connect your Power Apps Vibe app to real data by navigating to the Data section within Power Apps and linking it to supported data sources such as Dataverse, SQL, or SharePoint. After mapping the fields, you can publish the app again to make it production-ready.

The post Build AI-Powered Apps in Minutes with Power Apps Vibe: A Complete Guide (Preview) first appeared on Microsoft Dynamics 365 CRM Tips and Tricks.

How Copilot Studio Leverages Deep Reasoning for Intelligent Support Operations

CopilotStudio

Deep Reasoning in Microsoft Copilot Studio enables AI agents to analyze multi-step support scenarios, evaluate historical case data, apply business rules, and recommend well-reasoned actions similar to how an experienced support specialist thinks.

AI agents are becoming a core part of customer service operations, but traditional conversational models often struggle when scenarios become complex, like diagnosing a multi-step issue, understanding multi-turn case histories, or recommending the next best action.
Microsoft’s new Deep Reasoning capability in Copilot Studio (currently in preview) bridges this gap by enabling agents to think more logically and deliver more accurate conclusions.

This feature equips Copilot agents with advanced analytical abilities similar to how a skilled support specialist breaks down a problem, evaluates evidence, and suggests well-reasoned actions.

How Deep Reasoning Works

Deep reasoning is powered by an advanced Azure OpenAI model (o3), optimized for:

  • Multi-step thinking
  • Logical deduction
  • Complex problem solving
  • Chain-of-thought analysis
  • Context comprehension across long conversations

When enabled, the agent automatically decides when to invoke the deep reasoning model, especially during:

  • Complicated queries
  • Multi-turn conversations
  • Tasks requiring decision making
  • Summaries of large case files
  • Applying business rules

Alternatively, you can instruct the agent to explicitly use deep reasoning by including the keyword “reason” in your agent instructions.

Business Use Case:

Imagine a company that manages thousands of service cases, technical issues, warranty requests, customer complaints, and product inquiries.
Handling these efficiently requires deep understanding of:

  • Historical case data
  • Case descriptions across multiple interactions
  • Dependencies (products, warranties, previous repairs, SLAs)
  • Business rules
  • Customer communication patterns

A standard AI model can answer simple questions, but when a customer or sales representative asks something like:

  • Why was this customer’s case reopened three times?
  • Given the reported symptoms and past activity, what should be the next troubleshooting step?
  • Which SLA should be applied in this situation, and what is the reasoning behind it?
  • Considering the notes from all three departments, what appears to be the underlying root cause?

Your agent needs more than a direct lookup.
It needs reasoning.

This is where Deep Reasoning dramatically improves the experience.

How to Enable Deep Reasoning in Copilot Studio (Step-by-Step)

Setting up deep reasoning in a Copilot Studio agent is straightforward:

Step 1. Enable generative orchestration

This allows the agent to decide intelligently which model should handle each part of the conversation.

Step 2. Turn on Deep Reasoning

When enabled, the o3 model is added to the agent’s orchestration pipeline.

CopilotStudio

Step 3. Add the reason keyword (optional but recommended)

Inside the Agent Instructions, specify where deep reasoning should be applied:

As mentioned in the screenshot below, the word “reason” is used twice to trigger deep reasoning in our custom agent.

CopilotStudio
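
To make the wording concrete, a hypothetical instruction fragment along these lines would invoke deep reasoning twice (illustrative text only, not copied from the agent shown in the screenshot):

```text
When a user asks about case history, SLA breaches, or root cause,
reason over the full case timeline before answering.
When notes come from multiple departments, reason about how the
notes relate to each other before summarizing the likely cause.
```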

Step 4. Connect data sources

You can link multiple sources such as:

  • Dataverse Cases table
  • Knowledge bases
  • SharePoint documents
  • Product manuals
  • Troubleshooting guides

Deep reasoning enables the agent to interpret and analyze these materials more effectively.
For this example, I connected a Dataverse MCP server to provide the agent with improved access to Dataverse tables.

CopilotStudio

Step 5. Test complex scenarios

Ask real-world questions like:

  • Analyze the case history and determine the most likely root cause.
  • Based on the customer’s issue description, what steps should the technician take next?
  • Explain why this case breached SLA.

You will notice the agent provides a structured, logical answer rather than surface-level information.

CopilotStudio

You can also verify that deep reasoning was activated by checking the Activity section.

CopilotStudio

Frequently Asked Questions About Deep Reasoning in Copilot Studio

What model powers Deep Reasoning in Copilot Studio?
Deep Reasoning is powered by the Azure OpenAI o3 reasoning model, optimized for multi-step analysis and logical deduction.

When should Deep Reasoning be used?
It should be applied to complex, multi-turn conversations involving business rules, SLAs, historical data, or decision-making.

Does Deep Reasoning replace standard Copilot responses?
No. Copilot Studio dynamically decides when Deep Reasoning is required, using standard models for simpler interactions.

Can Deep Reasoning analyze large case histories?
Yes. It is specifically designed to interpret long conversations and large volumes of contextual data.

Conclusion

By connecting rich data sources and enabling deep reasoning, the agent becomes significantly more capable of understanding complex case scenarios and providing meaningful, actionable responses. When tested with real-world questions, the agent demonstrates structured analysis, logical decision-making, and deeper insights rather than surface-level replies.

This ensures more accurate case resolutions, improved productivity, and a smarter, more reliable support experience.

The post How Copilot Studio Leverages Deep Reasoning for Intelligent Support Operations first appeared on Microsoft Dynamics 365 CRM Tips and Tricks.

How to Download Large Files from Dynamics 365 CRM Using BlocksDownloadRequest API

How to Download Large Files from Dynamics 365 CRM Using BlocksDownloadRequest API

Introduction

When working with large files in Microsoft Dataverse (Dynamics 365 CRM), standard download methods often fail due to payload size limits, network interruptions, or memory overload. To address these challenges, Dataverse provides a chunked, block-based download mechanism through APIs such as:

  • InitializeFileBlocksDownloadRequest
  • InitializeAttachmentBlocksDownloadRequest
  • InitializeAnnotationBlocksDownloadRequest

This method is the recommended and most reliable way to download large files in Dynamics 365.

Why Use Chunked Download Requests?

Common challenges with large file downloads:
  • Timeouts or payload size limits
  • Unstable or slow networks (especially in mobile/VPN environments)
  • Memory overload when downloading full files at once

To overcome these, Dataverse supports block-based downloads. These requests initialize the operation and return a continuation token and file metadata, enabling files to be retrieved in chunks.

Benefits of this approach include:
  • Reliable, resumable downloads
  • Optimized memory and bandwidth usage
  • Scalable for mobile apps, portals, and external systems

Available Chunked Download Requests

  • InitializeFileBlocksDownloadRequest – For files stored in File or Image columns.
  • InitializeAttachmentBlocksDownloadRequest – For email attachments in the ActivityMimeAttachment table.
  • InitializeAnnotationBlocksDownloadRequest – For note attachments stored in the Annotation table.

How to Use Chunked Download Requests

The download process consists of three main steps:
  1. Initialize the download request
  2. Retrieve file blocks using DownloadBlockRequest
  3. Assemble or save the file locally
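
The three steps above can be sketched language-agnostically. In the minimal Python sketch below, `initialize` and `download_block` stand in for the actual Dataverse calls (SDK `Execute` or Web API); the function names are placeholders for illustration, not a real client API:

```python
from typing import Callable, Dict, Iterator, Tuple

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB per block, per Microsoft guidance


def block_ranges(file_size: int, block_size: int = BLOCK_SIZE) -> Iterator[Tuple[int, int]]:
    """Yield (offset, length) pairs that together cover a file of file_size bytes."""
    offset = 0
    while offset < file_size:
        yield offset, min(block_size, file_size - offset)
        offset += block_size


def download_file(initialize: Callable[[], Dict],
                  download_block: Callable[[str, int, int], bytes]) -> bytes:
    # Step 1: initialization returns the continuation token and file metadata
    init = initialize()
    token = init["FileContinuationToken"]
    size = init["FileSizeInBytes"]

    # Steps 2-3: fetch each block by offset and assemble the file in order
    data = bytearray()
    for offset, length in block_ranges(size):
        data.extend(download_block(token, offset, length))
    return bytes(data)
```

Because each block is addressed by its offset, a failed chunk can simply be re-requested without restarting the whole download.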

Example 1: Downloading File Column Data

C# Code Sample:

var initRequest = new InitializeFileBlocksDownloadRequest
{
    Target = new EntityReference("incident", incidentId),
    FileAttributeName = "reportfile"
};

var initResponse = (InitializeFileBlocksDownloadResponse)service.Execute(initRequest);
var token = initResponse.FileContinuationToken;
var fileName = initResponse.FileName;
var fileSize = initResponse.FileSizeInBytes;

long offset = 0;
long blockSize = 4 * 1024 * 1024;
var fileBytes = new List<byte>();

while (offset < fileSize)
{
    var downloadRequest = new DownloadBlockRequest
    {
        FileContinuationToken = token,
        Offset = offset,
        BlockLength = blockSize
    };

    var downloadResponse = (DownloadBlockResponse)service.Execute(downloadRequest);
    fileBytes.AddRange(downloadResponse.Data);
    offset += downloadResponse.Data.Length;
}

File.WriteAllBytes($"C:\\DownloadedReports\\{fileName}", fileBytes.ToArray());

Example 2: Downloading Email Attachments

To download an email attachment:

var initRequest = new InitializeAttachmentBlocksDownloadRequest
{
    Target = new EntityReference("activitymimeattachment", attachmentId)
};

var initResponse = (InitializeAttachmentBlocksDownloadResponse)service.Execute(initRequest);
var token = initResponse.FileContinuationToken;
var fileSize = initResponse.FileSizeInBytes;

long offset = 0;
long blockSize = 4 * 1024 * 1024;
var fileBytes = new List<byte>();

while (offset < fileSize)
{
    var downloadRequest = new DownloadBlockRequest
    {
        FileContinuationToken = token,
        Offset = offset,
        BlockLength = blockSize
    };

    var downloadResponse = (DownloadBlockResponse)service.Execute(downloadRequest);
    fileBytes.AddRange(downloadResponse.Data);
    offset += downloadResponse.Data.Length;
}

Example 3: Downloading Note Attachments

To download a note file from annotation:

var initRequest = new InitializeAnnotationBlocksDownloadRequest
{
    Target = new EntityReference("annotation", noteId)
};

var initResponse = (InitializeAnnotationBlocksDownloadResponse)service.Execute(initRequest);
var token = initResponse.FileContinuationToken;
var fileSize = initResponse.FileSizeInBytes;

long offset = 0;
long blockSize = 4 * 1024 * 1024;
var fileBytes = new List<byte>();

while (offset < fileSize)
{
    var downloadRequest = new DownloadBlockRequest
    {
        FileContinuationToken = token,
        Offset = offset,
        BlockLength = blockSize
    };

    var downloadResponse = (DownloadBlockResponse)service.Execute(downloadRequest);
    fileBytes.AddRange(downloadResponse.Data);
    offset += downloadResponse.Data.Length;
}

Real-World Example: Case Attachments in Customer Support

Scenario:
Customer support agents frequently upload large evidence files into a Dataverse file column, such as high-resolution screenshots, diagnostic logs, product failure images, or customer-submitted recordings. These files often range from 10 MB to over 100 MB, especially when dealing with technical issues or multimedia evidence.

Challenge:
Using standard download methods often leads to:
  • Browser timeouts due to file size
  • Failed downloads for VPN/home-office users
  • Performance issues when loading large files into memory
  • Problems for Power Pages or portal users with unstable network conditions

Solution:
By using InitializeFileBlocksDownloadRequest, the system downloads large attachments in safe, resumable chunks (typically 4 MB each). If the network drops or a chunk fails, only that block is retried, not the entire file.
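
As a sketch of that retry behavior, the hypothetical helper below re-requests only the failing chunk, with a simple linear backoff; the `fetch` callable stands in for a single DownloadBlock-style call and is not part of any Dataverse API:

```python
import time
from typing import Callable


def download_block_with_retry(fetch: Callable[[int, int], bytes],
                              offset: int, length: int,
                              attempts: int = 3, backoff: float = 1.0) -> bytes:
    """Retry a single block fetch; previously downloaded blocks are untouched."""
    for attempt in range(1, attempts + 1):
        try:
            return fetch(offset, length)
        except Exception:
            if attempt == attempts:
                raise  # give up after the final attempt
            time.sleep(backoff * attempt)  # simple linear backoff between retries
    raise RuntimeError("unreachable")
```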

Result:
• Escalation teams can download case evidence without interruption
• Remote and field technicians experience reliable downloads even on hotspot connections
• Large multimedia files no longer freeze or crash the application
• Faster resolution times and improved SLA performance

Conclusion

These chunked download requests offer a scalable, performant, and resilient way to retrieve large files from Dynamics 365 Dataverse. Whether working with file columns, email attachments, or notes, using block-based download logic ensures optimal handling of high-volume content in business-critical applications.

FAQ

1. Can I download files larger than 100MB using this method?

Yes. Block-based download supports very large files.

2. What is the recommended block size?

4 MB per Microsoft guidance.

3. Does chunked download work for Power Apps and external apps?

Yes, as long as the app uses the Dataverse Web API or SDK.

4. Can I resume a failed download?

Yes, you can retry the failed chunk because progress is tracked by offset.

The post How to Download Large Files from Dynamics 365 CRM Using BlocksDownloadRequest API first appeared on Microsoft Dynamics 365 CRM Tips and Tricks.
