
Power BI

If you are a consultant like me, you know how hard it can be to get access to the Power BI Admin API or a Service Principal. Sometimes you need to see all the workspaces you have permission for and what's inside them. Well, I found that with MS Fabric, we can use notebooks and achieve it in a few steps:
  1. Get the tenant ID
  2. Connect to the tenant for Power BI
  3. Call the Power BI REST API, get the data and save it in a Lakehouse
  4. Report using Power BI

If you are new to Microsoft Fabric and notebooks, check out this series on MS Docs – How to use notebooks – Microsoft Fabric | Microsoft Learn.

Get Tenant ID

There might be simpler ways to get the tenant ID using Python, but the code below works: it extracts the tenant ID of the current notebook from the Spark driver log URL.

conf_list = spark.sparkContext.getConf().getAll()

# filter the configuration parameters to get the driver log URL
filtered_list = filter(lambda x: x[0] == 'spark.tracking.driverLogUrl', conf_list)
value = next(filtered_list, None)
url = value[1]

# the tenant ID is the path segment straight after 'sparkui' in the URL
parts = url.split('/')
index = parts.index('sparkui')
TenantID = parts[index + 1]
print(TenantID)

Connect to the Tenant for Power BI

First, I need an access token, so I use the authority URL to authenticate with my own login. This gives me the workspaces I can access, but you can also use an Azure app and Service Principal authentication.

# Import the necessary libraries
# !pip install adal
from adal import AuthenticationContext
import requests

# Set the required parameters
TENANT_ID = TenantID
CLIENT_ID = '1aea3f97-edc6-4453-a59b-b88b0b803711'  # this is the Power BI Client Integrations client ID
RESOURCE = 'https://analysis.windows.net/powerbi/api'

# Set the authority URL and create an authentication context
authority_url = f'https://login.microsoftonline.com/{TENANT_ID}'
context = AuthenticationContext(authority_url)

# Acquire a user code and device code to authenticate the user
code_response = context.acquire_user_code(RESOURCE, CLIENT_ID)
print(code_response['message'])
token_response = context.acquire_token_with_device_code(RESOURCE, code_response, CLIENT_ID)
access_token = token_response['accessToken']

The code above uses the Power BI Client Integrations client ID to connect to the service. When I run it, it prompts me to enter a code to authenticate myself. This is fine for occasional or manual scripts but not very convenient otherwise.

Another option is to create an app and a service principal in Azure. This lets you authenticate and authorise your app to access Azure resources. To learn how to do this, check out this link – Power BI REST APIs for embedded analytics and automation – Power BI REST API | Microsoft Learn

!pip install azure.identity
from azure.identity import ClientSecretCredential
import json, requests, pandas as pd

tenant_id = TenantID
client_id = 'Your ClientID'
client_secret = 'Your Client Secret'  # better not to hardcode the client secret here; see how you can keep it safely in Azure Key Vault
ResourceURL = 'https://analysis.windows.net/powerbi/api'
APIURL = 'https://analysis.windows.net/powerbi/api/.default'

auth = ClientSecretCredential(authority='https://login.microsoftonline.com/',
                              tenant_id=tenant_id,
                              client_id=client_id,
                              client_secret=client_secret)
access_token = auth.get_token(APIURL)
access_token = access_token.token
# print(access_token)

Call Power BI REST APIs and save the data in the Lakehouse

In this example, I’m only fetching the workspace ID, the dataset ID and the report ID, but you can use any other endpoint you can access. Just make sure you have the proper permissions and authentication tokens. The code is in Python, but you can use any language that supports HTTP requests.

# Set the base URL and headers for the API calls
Baseurl = 'https://api.powerbi.com/v1.0/'
headers = {'Authorization': f'Bearer {access_token}'}
# print(headers)

import pandas as pd

# Get a list of all workspaces in the organization
response = requests.get(Baseurl + 'myorg/groups', headers=headers)
if response.status_code == 200:
    workspaces = response.json()['value']
    # keep the list of workspaces and their IDs in a dataframe
    workspaces_df = pd.DataFrame(workspaces)
    display(workspaces_df)
else:
    print(f"Error: {response.status_code} - {response.text}")

# replace null values with empty strings
workspaces_df = workspaces_df.fillna('')
# replace spaces in column names with _
workspaces_df.columns = workspaces_df.columns.str.replace(' ', '_')

# create a Spark dataframe from workspaces_df and save it as a Delta table
workspaces_spark_df = spark.createDataFrame(workspaces_df)
spark.sql("DROP TABLE IF EXISTS Workspaces")
workspaces_spark_df.write.format("delta").mode("overwrite").saveAsTable("Workspaces")

Datasets=[]
# Get a list of all datasets in the organization
for workspace in workspaces:
    # print(workspace['name'], workspace['id'])
    response = requests.get(Baseurl+f"myorg/groups/{workspace['id']}/datasets", headers=headers)
    if response.status_code == 200:
        datasets = response.json()['value']
        # get list of all datasets and their IDs and keep in a dataframe
        for dataset in datasets:
            # print(dataset['name'], dataset['id'])
            Datasets.append(dataset)
            #add workspace name and ID to the dataset
            dataset['Workspace Name'] = workspace['name']
            dataset['Workspace ID'] = workspace['id']


    else:
        print(f"Error: {response.status_code} - {response.text}")
# display(Datasets)


# Create a dataframe from the Datasets list
Datasets_df = pd.DataFrame(Datasets)
# select key columns
Datasets_df = Datasets_df[['name', 'id', 'Workspace Name', 'Workspace ID','configuredBy','webUrl','createdDate']]
# rename name and id to Dataset Name and Dataset ID
Datasets_df = Datasets_df.rename(columns={'name':'Dataset Name', 'id':'Dataset ID'})
# replace the null values with empty string
Datasets_df = Datasets_df.fillna('')
# replace spaces in columns with _  
Datasets_df.columns = Datasets_df.columns.str.replace(' ', '_')
Datasets_df.head()


#Create a spark dataframe from the Datasets_df
datasets_spark_df = spark.createDataFrame(Datasets_df)
spark.sql("DROP TABLE IF EXISTS Datasets")
datasets_spark_df.write.format("delta").mode("overwrite").saveAsTable("Datasets")
Reports = []
# Get a list of all reports in the organization
for workspace in workspaces:
    # print(workspace['name'], workspace['id'])
    response = requests.get(Baseurl + f"myorg/groups/{workspace['id']}/reports", headers=headers)
    if response.status_code == 200:
        reports = response.json()['value']
        # get the list of all reports and their IDs and keep them in a dataframe
        for report in reports:
            # print(report['name'], report['id'])
            Reports.append(report)
            # add workspace name and ID to the report
            report['Workspace Name'] = workspace['name']
            report['Workspace ID'] = workspace['id']
    else:
        print(f"Error: {response.status_code} - {response.text}")
# display(Reports)


# Create a dataframe from the Reports list
Reports_df = pd.DataFrame(Reports)
# select key columns
Reports_df = Reports_df[['name', 'id', 'Workspace Name', 'Workspace ID','webUrl','datasetId','datasetWorkspaceId']]
# rename name and id to Report Name and Report ID
Reports_df = Reports_df.rename(columns={'name':'Report Name', 'id':'Report ID'})
# replace the null values with empty string
Reports_df = Reports_df.fillna('')
# replace spaces in columns with _
Reports_df.columns = Reports_df.columns.str.replace(' ', '_')
# Reports_df.head()


#Create a spark dataframe from the Reports_df
reports_spark_df = spark.createDataFrame(Reports_df)
spark.sql("DROP TABLE IF EXISTS Reports")
reports_spark_df.write.format("delta").mode("overwrite").saveAsTable("Reports")
 
In the code snippet above, I started by defining the base URL so I could append the REST API endpoints to it later. Then I used the response status code to handle any possible API issues. After that, I stored each response in a pandas dataframe and converted it to a Spark dataframe before saving it in the Lakehouse as a Delta table. I also removed spaces from the column names because Spark tables don't allow them. This is the same pattern I followed for all the other APIs.
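Since this pattern repeats for every endpoint, it can be wrapped in a small helper. Below is a minimal sketch, assuming the Baseurl, headers and Spark session defined in the cells above; the function name is mine, not part of any API:

import pandas as pd
import requests

def save_api_response_as_table(endpoint, table_name):
    """Call a Power BI REST endpoint and save the response in the Lakehouse as a Delta table."""
    response = requests.get(Baseurl + endpoint, headers=headers)
    if response.status_code != 200:
        print(f"Error: {response.status_code} - {response.text}")
        return
    df = pd.DataFrame(response.json()['value']).fillna('')
    # Spark tables don't allow spaces in column names
    df.columns = df.columns.str.replace(' ', '_')
    spark.sql(f"DROP TABLE IF EXISTS {table_name}")
    spark.createDataFrame(df).write.format("delta").mode("overwrite").saveAsTable(table_name)

# for example: save_api_response_as_table('myorg/groups', 'Workspaces')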
 

Create Report

When I create a Lakehouse in Microsoft Fabric, it automatically creates some default items.

When I click on the ellipsis of either of the two default items, the Dataset (Default) or the SQL endpoint, there are various ways to analyse the data.

Now, you might be overwhelmed by all these different visualisation options. Each one serves a different purpose if you know the differences between them. But for my use case, I want a simple list of all workspaces and reports, who created them, who configured them, etc.

So, I selected the Dataset (Default) option and clicked Create Paginated Report.

I dragged and dropped the columns I wanted to see in the table, such as workspace name, report name, owner, created date, etc.

That's it! I now have a simple and handy list of all my workspaces in Microsoft Fabric. I can export, share or distribute this report to a wider audience.


This is a simple use case, but it has a lot of potential. Instead of using the default dataset, I could have created some relationships. I could have even used TOM (the Tabular Object Model) to create a dataset, relationships, measures, etc. I didn't have enough time to explore that, and I wasn't sure how to do it in Python. Maybe someone can try it and let me know how it goes.

I was curious, as I had started all of this with the embedded API and how it can read data from different sources in Power BI. I wanted to document workspaces, datasets and reports with all their details using the Embedded API. But I encountered a problem: "Embedding a DirectLake dataset is not supported with V1 embed token". Maybe this will change, and then we can do amazing things: document various artefacts in a workspace, create reports and publish them in a Power BI app – an intelligent way to document. For now, I used REST operations.

Another option is to use the Admin API and the scanner API: start a scan, get the dataset metadata, document measures, etc. I just wanted to show that these notebooks open doors to many amazing things. 🙂
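As a rough sketch of that scanner flow (the getInfo / scanStatus / scanResult endpoints of the Admin API), reusing the headers and workspaces from earlier and assuming the token has the required admin permissions; see the Microsoft docs for the exact parameters:

import time
import requests

admin_url = 'https://api.powerbi.com/v1.0/myorg/admin/'

# kick off a metadata scan for the workspaces collected earlier (max 100 IDs per call)
body = {'workspaces': [w['id'] for w in workspaces[:100]]}
params = {'datasetSchema': 'true', 'datasetExpressions': 'true'}
scan = requests.post(admin_url + 'workspaces/getInfo', headers=headers, params=params, json=body).json()

# poll until the scan finishes, then fetch the metadata
while requests.get(admin_url + f"workspaces/scanStatus/{scan['id']}", headers=headers).json()['status'] != 'Succeeded':
    time.sleep(5)
result = requests.get(admin_url + f"workspaces/scanResult/{scan['id']}", headers=headers).json()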

Until next time,

Prathy 🙂


I had been following the hints and clues on Twitter, where some Power BI team members were changing their job titles to Synapse. I was curious and hopeful about what they were working on, but I also didn’t want Power BI to lose momentum. When I saw the leadership changes, I felt reassured that whatever they did would be amazing.

Fast forward to Build, where Microsoft announced the long-awaited Microsoft Fabric. What is it? It's a SaaS data fabric platform with a common data lake called OneLake: a one-stop shop for all the analytics needs of every enterprise. With Fabric, there is no need to stitch together services from multiple vendors, and all data is saved as Delta Parquet files. No more vendor lock-in or proprietary data formats!

The data fabric concept is not new; as per Datanami, Noel Yuhanna has been at the forefront of data fabric. In his words, "A data fabric is essentially an abstraction layer that links a disparate collection of data tools that address key pain points in big data projects. A data fabric solution should deliver capabilities in the areas of data access, discovery, transformation, integration, security, governance, lineage, and orchestration. It should also provide self-service capabilities, as well as some graph capabilities to identify connected data.

By providing a way to bring these data management capabilities to bear on data spanning all these silos, a data fabric can help alleviate core data management challenges holding companies back from higher-level data use cases, including advanced analytics and AI in the cloud.”

Microsoft Fabric takes this concept to the next level by integrating it with Power BI, Microsoft Office and the Fluent UI framework. The integration with Office and Power BI makes the time and steps to data insights very flexible, and Fluent UI provides a consistent and beautiful user experience across all artefacts. I think Microsoft Fabric is the ultimate solution for anyone who wants to leverage the power of data in the cloud. Whether you prefer a centralized or distributed approach, Microsoft Fabric can handle it all. You don't have to change your existing skills, and you can use the tool or language you are comfortable with. There is plenty of information on MS Docs, and I am sure many people will be talking about it. I want to give my take on Microsoft Fabric as an ex-Microsoft SQL Server BI developer.

You see, I have a long history with Microsoft SQL Server and BI. I used to work with SSIS, SSRS and SSAS to create amazing data solutions. I loved how I could integrate, transform and analyse data using these tools. I felt like I had everything I needed in one place: SQL Server. Ever since, no matter what tool I work with, I still somehow try to connect the dots back to MSBI. Maybe it helps me understand the topic better, or stops me getting intimidated by a changing world.

Fabric sounds like a game changer, but as an ex-MS SQL BI developer, I find it familiar. In my older days, I accumulated data in SQL Server databases. I built ETL or ELT using SQL stored procedures and my first love, SSIS. All my data integration tasks were mainly done using SSIS, sometimes with a few bits of C# and hardcore SQL procedures. I used SSAS to create dimensional models; all calculations were traditionally created in MDX and, later, DAX.

I created beautiful SSRS reports with cool features like data-driven parameters, conditional formatting and lookup sets. I kept my end users on top of their KPIs using SSRS subscriptions. And not only end users: the hardcore IT guys knew when the server was hitting limits and when jobs were failing, and even got the occasional happy birthday email, thanks to SQL Server Agent jobs and sometimes a bit of help from the Windows task scheduler. Whatever I did, I moved, transformed and analysed all the data on my SQL Server. There were occasional intruders like Oracle, MySQL etc., but I was faithful to SQL Server.

But times have changed, and so has the data landscape. Now we have more data sources, more data types, and more data challenges. We need to be able to handle big data, streaming data, unstructured data and more. We need to be able to scale, secure and optimize our data pipelines. We need to be able to deliver insights faster and easier.

That's where Microsoft Fabric comes in. It's a platform that lets me connect all kinds of data sources and destinations. Where I used to use linked servers for an existing, nicely formatted data platform, I would now use Shortcuts in Fabric. Where I used to access data files through a mapped network drive on a server, I would now mount my storage on Fabric. Whether I have a well-structured data platform or a bunch of files on a server, I can easily access them with Shortcuts and Mounts.

For data integration, I used a combination of SSIS and SQL Server functionality like stored procedures and functions. Now, on Fabric, I can use Data Pipelines and Dataflows Gen2 to create visual and intuitive workflows for my data transformations. Or I can use notebooks to write code in any language I want. I think I will mostly be using Pipelines and Dataflows Gen2; I am a more visual person.

I haven't played with it enough yet, but it looks like I can create multiple dataflows and call them from Data Factory pipelines. That would be really useful for scenarios like a For Each loop.

Next come the methodologies or frameworks for data storage. The proven approach then was ETL: extract raw data from operational data sources, transform and store it in a transactional layer, then load it into a data warehouse in a presentable format for further analysis. I think the old ETL is now the Medallion Lakehouse architecture. For larger data warehouses, we used to create database schemas or data marts for each department or domain; now that's domain-based data mesh architecture.

Microsoft Fabric supports any data storage framework or architecture I need. I can use Workspaces and Domains to organize my data into different stages and domains. I can follow the Medallion Lakehouse approach or create custom data marts for each department or domain.

Next, data analytics. In the past, we had three options for creating an analytics layer: SQL views, a SQL data warehouse or an SSAS model. The choice depended on many factors, such as urgency, use case, budget and the developer's skills. Now Fabric gives us more flexibility and power: we can use gold-level Lakehouse tables for fast and reliable data access, build a Warehouse with custom measures and calculations, or create a Power BI dataset for interactive visualisation.

In my opinion, the priorities should be time to insight and performance, so using the benefits of Delta Lake and the one copy of data would be top of my list. But again, there are best practices and there are client scenarios; it's not always easy to follow a recommended framework.

But that's not all. Analytics is not just about making nice reports with actionable insights. We are in the age of AI, and we want those insights to reach the end users in the most effective way possible. In the past, I used SQL Server Agent and the Windows task scheduler to automate things, I used SSRS and Excel for further analysis, and I relied on support teams for ad hoc report requests. In the modern era, a few export options and self-service BI are not enough. We need scalable platforms with no proprietary lock-in, so the final output can be used for further analysis. Gold-level data is not the end: we may want to analyse the final dataset created by a self-service end user, or add the pixie dust of OpenAI to that final data frame created by a citizen developer. That's where Fabric is perfect. I am no longer limited to the final Load stage of my ETL framework or the gold layer of the Medallion Lakehouse architecture. We are moving into a world of platinum and diamond layers.

There are still so many elements of Microsoft Fabric I don't know yet, like the whole data science side, which I don't understand yet. But as an ex-MS SQL BI developer, I find it familiar. It reminds me of how I used to work with SQL Server and the BI tools. It's like a modern version of them, a new way of doing old things. It's time to bring data platforms and analytics platforms back together. Together in the cloud 🙂

What do you think? Are you excited about Microsoft Fabric? Do you have any questions or comments? Let me know in the comments section below. 

Until next time, Prathy 🙂

USING A TREE MAP AS A LEGEND ON A PAGE IN POWER BI

I recently worked on two projects where the client wanted to show multiple metrics sliced by the same categorical data; for example, seeing how various metrics perform across different regions or product groups. A use case like this can be achieved in many ways; probably the best approach is to use the small multiples functionality or, to keep it simple, five copies of the same visual with different metrics.

Let's look into it with energy consumption data. Here, I want to show metrics 1 to 5 for different income levels over the years.

Five different visuals

When you know exactly how many categories the requirement has and how you want to display your data, going for a fixed number of visuals is my favourite approach.


Using Small multiples

When there is no clarity about the number of categories, the small multiples feature shines. For this use case, I went for two columns; with five categories, I get an extra blank space. It doesn't look that bad, but I would like more control over the placement of visuals on the canvas.


When I compare five individual visuals to small multiples, small multiples may perform better from a number-of-queries point of view, and it also looks a bit tidier with one legend and one axis.

To have the best of both, we can use a Tree Map visual as a legend.

Tree Map visual as a legend

For the tree map, I used the income level column as the category and a distinct count of the same column as the value. I turned off data labels, header icons and tooltips. I made sure I used the same "Max Axis" value for all visuals, making the comparison seamless.
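The distinct count used as the value can be a one-line measure; here is a sketch with hypothetical table and column names (my example assumes an income level column on an energy consumption table):

Income Level Count = DISTINCTCOUNT ( 'Energy Consumption'[Income Level] )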

This approach is cleaner than showing the same legend for each visual, and it serves the same purpose as a usual legend: a user can identify what is what based on colour or shape, and can interact by clicking on the legend.
https://prathy.com/wp-content/uploads/2022/06/2022-06-25_14-49-20-1-3.gif

The benefits of this approach: your report stands out, and a nice big legend is easy to interact with on touch-screen devices and looks sleeker than repeating the same legend. I hope this blog inspires someone out there.

Until next time,

Prathy 🙂


In one of the projects I was working on, I received feedback saying it was hard to understand how many items had been selected in a slicer, and it is not the first time I have come across this. It is a valid point: when you have quite a few items in a slicer, you use the search bar to look for items and select a couple, but you are not sure how many are selected.

Obviously, there are many solutions to one problem. I have seen many reports displaying custom labels to handle this situation. Still, after some research for a better UX, I found this example on Dribbble, which looked like an excellent design method.

shot.gif (800×600) (dribbble.com)


The logic here: I use a DAX measure to figure out how many items are selected and display that count in an object. I can display this in many ways, but I went for a Shape. Having a shape allows me to use the shape type as a background, and I can use a dynamic DAX expression to show the value. The only thing to note is that my DAX expression has to be of string type; hence, I created my measure accordingly (a sketch follows at the end of this post).

Next is the slicer visual. I enabled the slicer header and used an empty character as the text in the slicer header. If I keep the slicer header off, whenever I hover over the slicer it jumps down/up to make more options visible; if I keep the slicer header on, that isn't an issue. The only way to keep the slicer header on with no visible title and no tooltip is to use an empty character, so I used one as the title for the slicer header. Then I changed the title to use the DAX measure, and grouped the slicer and the shape together. Now, when I select multiple items, it nicely shows how many items are selected; when none are selected, it shows the number of items available in the slicer. In this example, I don't have many values, so a nice round shape works, but depending on the number of results I may go for a different shape type.

But making it work perfectly – I mean, making this shape show the right number irrespective of cross-filtering from other visuals – can be a tiring job. One way is to make all the rest of the visuals not interact with this shape; that can be time-consuming when you have a lot of objects on a page. The other way to handle it is by creating a new table and a relationship. In this example, I have artists and their total streams, and I am using Artist in my slicer. When I select an artist in another visual, by default my measure also gets filtered. If I create another table with distinct artists, use that table in my slicer and for the count, and use the Artist column from the original table in all the other visuals, then I get the result I want:

  1. Create an Artist table
  2. Create a relationship between my Streams table and the Artist table
  3. Update my slicer to get artist names from the Artist table
  4. Update the measure to get the artist name from the Artist table

Now cross-filtering doesn't impact the value I'm showing. I hope this inspires someone out there.
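The measure itself isn't shown above; a minimal sketch of what it could look like, assuming the distinct Artist table described earlier and the string type the shape's text requires:

Selected Artists =
VAR SelectedCount = COUNTROWS ( VALUES ( 'Artist'[Artist] ) )
RETURN
    FORMAT ( SelectedCount, "#,0" )

With nothing selected, VALUES returns every artist, so the shape naturally falls back to the total number of items in the slicer.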


This is a blog post series showing various ways of creating parts of landing pages for better UX and navigation.

Tiles using images, Power BI buttons, hover effects and all of the above

Whether it is a Power BI report or a web page, tiles are one of the best-known ways to add user experience to landing pages. When tiles have images, dynamic content and actions, they become more practical and engaging. Let's look at some ways of creating tiles in Power BI.

Approach 1

This is probably the most common approach. Import an image into Power BI Desktop, add an action to go to a bookmark (in this case, a detail page) and add some text under the image. This text can be dynamic with DAX.


Approach 2

We can extend the example by adding a button: when the user hovers over the button, a text appears giving the context of the action or detail.


This can be achieved by putting an empty button on top of the image, then updating the properties of the button to show text only "On Hover".

Approach 3

In the above example, the text is static, but we can make it dynamic.


This can be done using the expression functionality of the button's Text property, which gives us dynamic text.
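For instance, a measure used as the button text could look like the sketch below; the table, column and measure names are hypothetical, not from the original report:

Tile Button Text =
"Total sales: " & FORMAT ( SUM ( 'Sales'[Amount] ), "#,0" )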

Approach 4

Images are likeable, but too many are never right. That's when, instead of going for an image, we can just show a button with text, and on hover we can add more context, like below:

Here, I am just using a group of visuals and an image to show details. I added static text with a URL feel.

Approach 5

Another way is to show static or dynamic text in the button, but instead of showing it over the image, show it under the image as a description.

To achieve this result, we use the same text properties as in the approaches above, but instead of top or middle vertical alignment, go for bottom vertical alignment. Also, make the blank button bigger than the image so that the text shows right under the image on hover.

More landing page tips to follow; I hope these examples leave you with some inspiration.

You can download the PBIX here and view here

Prathy 🙂


Like many things in Power BI, the tooltips functionality has evolved so much that I will probably do a full post on tooltips sometime soon. Today it's about the visual header tooltip. While working on a project recently, I came across the new tooltip icon under visual headers, and I have been using it a lot since, so I thought I would blog about it.

Let's start with what a tooltip is:

A tooltip is a graphical user interface (GUI) element used in conjunction with the cursor or mouse pointer to display information about an item without needing to click on it. The typical scenario for summoning a tooltip is to hover the mouse cursor over another GUI element such as a tool icon in software application, and it is also prevalently used in websites.
A tooltip is also known as a hint, infotip or screentip.

I think in Power BI, the visual header tooltip icon works more like a hint, infotip or screentip. You can enable it on each visual under Visual Header in the visual's properties. This blog post explains how to enable it – https://powerbi.microsoft.com/en-us/blog/power-bi-desktop-june-2019-feature-summary/#headerTooltips

What I want to talk about is use cases:

  • Visual annotations
  • Business logic explanation / Calculations
  • How to navigate around the visual
  • Show more data related to visual
  • Anything or everything else

Visual annotations: 

When we look at data journalism pieces, most of the time they have annotations explaining what the visual is showing or talking through the measures. Most of these data stories were made for print, but in the digital world we see such annotations presented more interactively. It is nice to have these little annotations for everyday reporting as well, and the tooltip icon can be used for that purpose. It is also important to use canvas space wisely, and having this kind of hint helps us save canvas space.

So, in the following visual, I am using the tooltip icon for annotations. When the user clicks on the header tooltip icon, it shows annotations related to the visual.

Business logic / Calculations explanation:

Another good use case is explaining business logic. In the BI world, everyone works towards a single version of the truth, but many times it's illusory. Another silly thing is naming conventions: as with Microsoft products, in the business one metric may mean different things to different teams. So having a bit of explanation about a metric can be very, very handy, and the tooltip icon can be used for it.

In the following visual, I am showing how the total is calculated, using the tooltip icon.

How to navigate around the visual:

Not all visuals are as self-explanatory as a helpful bar chart. Many need explanation, and not only the visuals: all the other extra features like drill down, drill through, tooltips, etc. benefit from a bit of guidance. Again, the tooltip icon comes in handy here.

In the below example, I am using the tooltip icon to show how to navigate around the visual.

Show more data related to visual:

Simplicity is the key! When you want to focus on one thing, showing only that helps. But I am sure there is always that one business user who wants to see one more metric on the report page.

For example, in the below visual, I want to highlight the trend lines, but the business user also wants to know the number of rows for male and female. That could be something I show using the tooltip icon.

Or

Anything else you want to display 🙂

Hope you enjoyed reading; let me know how you are using this functionality…

Till next time,
Prathy 🙂


Power BI bookmarks, synced slicers and selections: these features transformed the way we build reports in Power BI. Now most business users want reports with actionable interactivity. That has its pros and cons, but today's post is not about those; it's about synced slicers.

Synced slicers were one of the most requested features. As the name says, this functionality enables you to sync slicers across Power BI report pages, and you can manage their properties in the Sync Slicers pane.

Creating Synced Slicers:

For me, there are two approaches to setting up synced slicers:

  1. Slicer first – create the slicer, then sync it to other pages
  2. Report first – design the report pages, then sync. If you use the same field on different pages, Power BI automatically detects it and shows a dialogue asking whether you want to sync.

I usually prefer to go slicer first. If I know I need synced slicers, I try to be consistent with the report layout and the position of slicers on each page. With the slicer-first approach, we create the slicers and then, using the Sync Slicers pane, push them to the pages we want. This method places the slicer at exactly the same position on every page, and it keeps the formatting too.

If I don't want to be consistent with the formatting and position of the slicer visual, I go for the second method, where I place the slicer wherever I want and then sync it using the Sync Slicers pane.

That's it; it is straightforward and works like a charm. But what if you want to sync within a page? You may ask why you would ever want to sync within a page; we will look at that in my next blog post, so for now let's assume there is a valid cause. That's when the advanced options come to the rescue.

Synced Slicers Advanced Options:

You can find the advanced options in the Sync Slicers pane, just below the list of pages.

As per Microsoft docs:

“This feature lets you create a custom group of slicers to keep synchronized. A default name is provided, but you can use any name you prefer.

The group name provides additional flexibility with slicers. You can create separate groups to sync slicers that use the same field, or put slicers that use different fields into the same group.”

First, let's look at creating groups to sync slicers that use the same field. For the use case of syncing within a page, we can easily use the group functionality to do this.

For example, here I have two slicers, both based on the same field.

If I don't have them in a group, when I select something in one slicer, the other slicer gets cross-filtered, like below.

But if I keep them in a group and choose "Sync field changes to other slicers", I won't have that problem. This enables me to have two synced slicers within a page.

Next, creating groups to sync slicers that use different fields. This was interesting: you can add slicers with different fields into one group, and they will sync on every page that has a slicer from that group. I didn't understand it straight away, but with the help of Will Thomson, I understood the beauty of it.

For example, consider a sales table with Order Date, Delivery Date and Product. If you want to find all the products that were purchased and delivered on the same day, you can use this functionality.

For example, here I have a table with Order Date, Delivery Date and Product:

Order Date         Delivery Date      Product
1 January 2019     2 January 2019     Apples
2 January 2019     10 January 2019    Pine Apple
3 January 2019     3 January 2019     Avocado
12 January 2019    28 January 2019    Apples
15 January 2019    15 January 2019    Pine Apple

If I create a report like this:

If I don’t keep both of those slicers in the same group, by default my report will interact like below:

But if I keep both slicers under one group, my report will behave differently:

An interesting thing to notice here is that when two fields are in the same group, it basically adds the selection from one slicer to the other. It sounds a bit tricky, but it does work.

We can see whether a slicer is synced via pages or via a group by looking at the Sync Slicers pane: page-synced slicers are highlighted with a yellow tick and group-synced slicers with a grey tick. Also note that a slicer can be part of a group and synced page-wise at the same time. I also noticed "Add and sync with all pages" in one click; I never knew that existed.

 

Summary:

I use sync slicers all the time, and my clients love this functionality. Sync slicers are not great for data analysis but great for storytelling. Will I use the slicer grouping functionality? I am not sure, but it's definitely worth knowing the feature exists.

Keep Smiling,
Prathy 🙂


In my current project, one of the user requirements was to have a filter on the year slicer. To explain in detail: we have various measures showing metrics for the current and earlier years, for example 2016, 2017 and 2018. In 2016, we always have blank values for the Last Year metrics, and empty values don't tell the story well. So to tell the story, we need to pull three years' worth of data but display only two years in the slicer. The easiest way to handle this situation would have been a visual-level filter on the year slicer.

Power BI slicers, however, don't support visual-level filters. With the help of the Selection pane and sync slicers, I did a quick workaround. I created another slicer for the Year attribute; let's call it Slicer-0. I updated the visual interactions of the original slicer (let's call it Slicer-1) so that it does not filter Slicer-0. As I have synced slicers on all pages, I synced Slicer-0 to all pages and made sure I updated the visual interactions on every page. Then I hid Slicer-0 on all pages.

Things to remember:

A hidden slicer means hidden logic in the model. Whenever I design Power BI models, I try to keep a page of designer notes specifying things like this: basically, things which are helpful during troubleshooting. So if you are using hidden slicers, make sure you note it somewhere.

Another thing to note: the "Filters and slicers affecting this visual" pop-up will show the extra slicer information. If you want, you can disable the filter icon under Visual Headers, but I prefer having the filter option enabled.

Prathy 🙂


It has been a while since I have written a blog post; even my ten-year-old daughter started teasing me, calling it a "motionless blog". So I decided to get back to blogging with a short and simple post based on a question I received about one of my Power BI portfolio reports, Global Landslides Data. A few people asked me how I created the slicer in this report.

So, here is how I created the dynamic legend for the line chart in this Power BI report:

I was using a slicer as a button to dynamically choose the legend I wanted to show in the line chart. In fact, it was relatively straightforward.

First, I created an aggregated table like the one below.

Table =
UNION(
//Incidents
SUMMARIZE('Global Landslide Data'
,'Global Landslide Data'[ID]
,'Global Landslide Data'[Date]
,"Slicer","Incidents"
,"Incidents",'Global Landslide Data'[Incidents]
,"Distance",BLANK()
,"Injuries",BLANK()
,"Fatalities",BLANK()
)
,//Distance

SUMMARIZE('Global Landslide Data'
,'Global Landslide Data'[ID]
,'Global Landslide Data'[Date]
,"Slicer","Distance"
,"Incidents",BLANK()
,"Distance",CALCULATE(SUM('Global Landslide Data'[Distance]))
,"Injuries",BLANK()
,"Fatalities",BLANK()
)
,//Injuries

SUMMARIZE('Global Landslide Data'
,'Global Landslide Data'[ID]
,'Global Landslide Data'[Date]
,"Slicer","Injuries"
,"Incidents",BLANK()
,"Distance",BLANK()
,"Injuries",CALCULATE(SUM('Global Landslide Data'[Injuries]))
,"Fatalities",BLANK()
)
,//Fatalities

SUMMARIZE('Global Landslide Data'
,'Global Landslide Data'[ID]
,'Global Landslide Data'[Date]
,"Slicer","Fatalities"
,"Incidents",BLANK()
,"Distance",BLANK()
,"Injuries",BLANK()
,"Fatalities",CALCULATE(SUM('Global Landslide Data'[Fatalities]))
)
)

The logic is to create a table with the DAX UNION function. Each table expression in the UNION function represents one value of the slicer; apart from the values related to that slicer item, all the rest of the values are blank. It is key to have them as blanks rather than zeros, so that we don't see any data for the other measures.

Then I chose the Slicer column as the value in my slicer visual and allowed multiple selections. In the line chart, I used all the measures as values, so based on the slicer selection it shows the relevant trend line.

You can see it in the report here – https://app.powerbi.com/view?r=eyJrIjoiN2I4YWI4MjMtYmIyYy00ZjRkLWFjYTktZjM1ZjIwODk0ZjkzIiwidCI6IjM4YmU3MjU5LTMyMGQtNDZkYy04YzNiLWNkZDA5OTM1NGExMyIsImMiOjh9

Till next time,

Prathy 🙂

ADDING A VERTICAL LINE OR MARKER TO A CHART IN POWER BI

A business user came to me asking, "How can I get a vertical line on a line chart in Power BI, like in my Excel report?" I was like, "It's straightforward. We have the Analytics pane, which lets you add so many types of lines 😊". But quite quickly I understood that I cannot draw a vertical line the way we do in Excel or SSRS.


The data looks like below:

The user wants a marker, ideally a line, to point out those events on the chart. I cannot think of an out-of-the-box option, but there are some workarounds. Let's look at adding a line using a combo chart. This chart clearly shows events as bars, and tooltips can explain what kind of event it was. I would prefer to be able to change the bar width: it's a bar, not exactly a line, but it fits the purpose.

 

Next, using markers:

With the latest Power BI update, we can set the line stroke width to zero, so I decided to use that feature to highlight the events on the chart. It's a simple line chart with one axis and two values: one to show the Value column and the other to show the Count of Event. My chart looks like this:

Then I updated the X-axis type to Categorical (otherwise, the markers functionality will not be available). Under Shapes, I set Show marker to On, followed by the Customize series option to On. Then I turned the marker off for the Value column and on for Count of Event.

Then I decreased the stroke width to 0 and increased the marker size to 16. Now the chart looks like this:

The tooltips don't make much sense by default, but by using the new report tooltip pages functionality, we can have a much more helpful and user-friendly tooltip, like below:

Hope this post helps someone out there.
Prathy 🙂