Power BI Routing Visual with Two Lines of R

Objective

Although the out-of-the-box Bing and ESRI maps in Power BI can satisfy most business requirements, visualizing routes or shipping lanes for the transportation industry can be challenging. Fortunately, Power BI lets us expand our visualization palette with either custom visuals or R visuals.

In this blog post we will look at publicly available flight data and determine which routes have the highest likelihood of cancellation. This could easily be translated to shipping or transportation scenarios.

You can get the final solution from my GitHub repository, or you can download the “Airports” data set from http://openflights.org/data.html and the “Flights” data set from Enigma.IO.

NOTE: Enigma.IO is a repository for public datasets. It is free but requires you to create a login. I enjoy working with the enigma-us.gov.dot.rita.trans-stats.on-time-performance.2009 data set, as it is rather large at 2.4 GB.

Although the above visual only requires two lines of R code to be written, there are two hurdles to clear first: ensuring R and the required libraries are installed on your desktop, and doing the data prep in the query editor to create the R data frame in the format that is expected.

Installing R and ggmap

There is already well-documented guidance on installing R for use with Power BI on the Power BI Documentation site. Once this installation is complete, we need to get ggmap and other supporting R libraries installed as well.

I prefer going to the RGui command line (just search for “RGui”) and performing the following command:
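For ggmap, that command is:

    install.packages("ggmap")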

Running this in the R console will automatically download any dependent packages as well. If you ran this line directly in a Power BI visual, it would not install the other required packages, and you would get an error when you run the solution.

Data Prep

In Power BI, let’s first bring in the airports CSV file we downloaded from http://openflights.org/data.html. The important columns in this data set are the 3 letter airport code and the latitude and longitude of the airport. You can include the other fields for more detail, as I am showing below; however, they are not necessary to achieve the R visual above.

Next, import the flight data that was downloaded from Enigma.IO for 2009. This data is extremely wide and contains a lot of interesting data points, but we can remove the large portion of columns that we will not be using. Scroll to the right, Shift+click to select the columns that start with “div”, and right-click to Remove Columns. Alternatively, you can use the “Choose Columns” dialog to deselect them.

To reduce the number of rows we will work with, filter on the flightdate column to only retrieve the last 3 months.
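If you prefer typing the step over using the filter dialog, the generated M looks something like this (the previous step name is a placeholder, and I am assuming “last 3 months” means October through December of the 2009 file):

    = Table.SelectRows(#"Removed Columns", each [flightdate] >= #date(2009, 10, 1))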

Shaping the Data

We now need to shape the data for the R visual. To only require two lines of R, the data has to be in the following format:

index  Direction    Latitude  Longitude
1      Origin       41.97     -87.9
1      Destination  33.63     -84.42
2      Origin       41.73     -71.43
2      Destination  33.63     -84.42
3      Origin       35.21     -80.94
3      Destination  33.63     -84.42

We will include other columns, but the format of alternating rows (the origin of the route, then the destination, each with its latitude and longitude) is required. An Index column that keeps the origin and destination values ordered appropriately will also keep Power BI from making adjustments that you don’t want.

The Double Merge

Latitude and longitude are not included in the flight data, so we need to do a MERGE from the airports data set that we have.

NOTE: Finding lat/long values for zip codes, airport codes, city names, etc. is generally the hardest part of using ggmap, which is why a second reference data set is usually required to get this data.

Perform a “Merge Queries” between the “origin” column of the flights data and the “Code” column of the airports data (this is the 3 letter airport code).

Rename the newly created column “Origin” and then click the “Expand” button, selecting ONLY the Latitude and Longitude columns to be merged.

Now repeat the above merge steps a second time, but instead of “origin” use the “dest” column from the flights data, merging against the same 3 letter code in the airports data. Call the new column “Dest” before expanding the Latitude and Longitude.

Once finished your dataset should look like this:

We have successfully acquired the latitude and longitude that ggmap needs to plot the path of the line. However, we need the lat/long values for origin and destination to be on separate rows, as shown above. This is where we get to use my favorite M/Power Query function: “Unpivot”.

Using Unpivot

To ensure that the order of our rows does not get out of sync later when creating the data frame for the R visual, add an index column via the “Index Column” button on the “Add Column” tab. I start at 1.

We need two columns to unpivot on, to create two rows for each single row currently in the data set. The simplest way I have found is to add two custom columns via the “Custom Column” button on the “Add Column” tab. Just fill in the expression with the string “Origin” for one new column and “Destination” for the other, as shown below.

The data should now look like this:

Select both of the new custom columns (mine are called Direction 1 and Direction 2), right-click, and select “Unpivot Columns”.

Now each row has been duplicated so that we can get the Origin and Destination latitude and longitude on separate rows.

The newly created “Attribute” column should be removed, and in my example I have renamed “Value” to “Direction”.

NOTE: In this example I am using unpivot to turn my single row into two rows. A more meaningful use of unpivot: if you have revenue data by month and each month is represented as a column (Jan, Feb, March, etc.), you could select all the month columns and “Unpivot”, and you would then have a row for each month as the attribute and the sales amount as the value.

Conditional Columns

Once the unpivot is complete, add two conditional columns, “Latitude” and “Longitude”, via the “Conditional Column” button on the “Add Column” tab to get the Origin and Destination values into a single column for each. Use the “Direction” column as the condition: when it equals “Origin”, select the “Origin.Latitude” column; otherwise, select the “Dest.Latitude” column.

See below example for Latitude:

Be sure to change the type of the two circled buttons from Value to Column.

Repeat the above for Longitude.
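The M that the two Conditional Column dialogs generate should look roughly like this (the step names here are placeholders; the column names are the ones created by the merges above):

    = Table.AddColumn(#"Unpivoted Columns", "Latitude",
        each if [Direction] = "Origin" then [Origin.Latitude] else [Dest.Latitude])

    = Table.AddColumn(#"Added Latitude", "Longitude",
        each if [Direction] = "Origin" then [Origin.Longitude] else [Dest.Longitude])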

Change the type of these new columns to decimal numbers.

Now remove the 4 columns “Origin.Latitude”, “Origin.Longitude”, “Dest.Latitude”, and “Dest.Longitude”.

The last 4 columns of the Flights data set should now look like this:

Data prep is complete. We can now close and Apply.

The data load will take a long time if you used the 2.4 GB file from Enigma.IO… Time for coffee 🙂

Creating the Viz

As we work with this data set, remember that we now have two rows representing each flight. So if you want to get any counts or summarizations, always divide by 2 in your measures.
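The three measures aren’t shown here, but a minimal sketch might look like this (dividing by 2 because of the doubled rows; “cancelled” is the flag used in the slicer below):

    Number of Flights = COUNTROWS ( Flights ) / 2

    Number of Cancelled Flights =
    CALCULATE ( COUNTROWS ( Flights ), Flights[cancelled] = 1 ) / 2

    % Cancelled = DIVIDE ( [Number of Cancelled Flights], [Number of Flights] )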

With these 3 measures, we can start working on visualizations. First, simply a few card visuals and a matrix by origin/destination:

We have 1.6 million flights in our data set. Creating an R visual to represent all of these routes will not be very productive and will probably crash your memory anyway. Let’s set up a filter by origincityname and plot only the routes of cancelled flights.

For the above visual, we should first add origincityname as a slicer and select “Atlanta, GA”. Then add a slicer for cancelled = 1.

To create the R visual, select the “R” icon in the Visualizations pane.

Pull in the following values from the “Flights” data:

This automatically generates some R code in the R script editor for the visual.

NOTE: This is really important, as we have eliminated potentially hundreds of lines of R code that would otherwise be needed to “prep” the data frame into the shape the ggmap function expects. This was all done via the query editor transformations we made above.

Now we simply add the 2 lines of R code underneath the generated R script section.

NOTE: The comments make it longer than two lines, but they help describe what is happening.
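The original lines aren’t reproduced here, so below is a minimal sketch, assuming the fields pulled into the visual are the Latitude, Longitude, and index columns shown in the sample table above. The “united states” location and zoom level are my own choices; dataset is the data frame that Power BI’s generated script creates.

    # ggmap also loads ggplot2, which provides geom_path
    library(ggmap)
    # draw a base map, then one path per flight by grouping the
    # alternating Origin/Destination rows on the index column
    qmap("united states", zoom = 4) +
      geom_path(aes(x = Longitude, y = Latitude, group = index),
                data = dataset, colour = "red")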

The geom_path function is part of the ggplot2 library and is further explained here: http://docs.ggplot2.org/current/geom_path.html

Hopefully from this example you can see that the R code is fairly minimal in filling the requirement of routing visualization.

Measures vs Columns for this R Visual

One limitation that currently exists in Power BI Desktop is that the measures we defined earlier are not really providing value to the visualization, because we need to include the “Index” column to keep the dataset ordered as expected for ggmap to plot the route from the alternating rows of data.

Because the Index column is required to keep the sort order, the filter context applied to the DAX measure “Number of Cancelled Flights” will always yield 1. This does not allow us to do much “business intelligence” on the data set.

EARLIER function

To this day, I am still not fully sure why they call this function EARLIER, but what we need to do to introduce some actual “business intelligence” into this R visual is create a column with the total number of cancelled flights for a given route. This column’s value is repeated over and over, so beware of how you use it. However, it gives us a great way to retrieve ONLY the data that we want.

For the “Flights” data set, add the following column to create a unique value for each Route that exists between airports:
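One simple way to build that key (a sketch using the origin and dest columns):

    Route = Flights[origin] & "-" & Flights[dest]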

Once that column is added, the EARLIER function can be applied to get the total number of cancellations for the route:
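A sketch of that calculated column: EARLIER refers back to the Route value of the current (outer) row, and we divide by 2 because each flight occupies two rows.

    Cancellations on Route =
    COUNTROWS (
        FILTER (
            Flights,
            Flights[Route] = EARLIER ( Flights[Route] )
                && Flights[cancelled] = 1
        )
    ) / 2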

The above value is repeated in every row, so don’t summarize it… use it as a page-level filter or a slicer to only retrieve data that meets a requirement (example: only show routes that have more than 25 cancellations).

Make sure your slicers are cleared and your new plot should look something like this:

Now the solution is starting to become usable for gaining some insights into the data. You can download my finished solution from the GitHub repository and see that I have duplicated the “Airports” data set, one for Origin and one for Destination, which I can use as slicers to more quickly find routes that have frequent cancellations from/to each city or state.

Conclusion

This is just one example of the many ways Power BI can be extended to help solve business problems through visualizations. One note: ggmap is not yet supported on PowerBI.com, so this specific R visual is only available in the desktop, but many other R visuals are supported in the service as well.

And for my next article, we will see if I am brave enough to post my real opinions on the internet about Big Data and data warehousing architectures. I have lots to “rant” about, but we will see if I actually post anything 🙂

Map Visual – How to deal with large datasets

 

The Power BI bubble map is very useful when plotting geographic points rather than shapes or areas. I’d say that in 90% of cases, the out-of-the-box functionality represents your data just fine. Occasionally, however, you will run into scenarios where you must plot more than a few thousand data points; you will exceed the upper bound and get the little information icon shown below.

[screenshot: map visual with the information icon indicating too many data points]

What is frustrating about this when it comes to maps is that it does not take a representative sample of the data and instead will just “cut off” the data points. Other visualizations will show an information icon with this message instead:

[screenshot: information message that a representative sample is shown]

There are 3 techniques that I have used to deal with this issue.

  • Creating a representative sample via DAX
  • Using R script with ggmap
  • Using a location hierarchy

All of these have pros and cons.

Getting a Representative Sample via DAX

The best way to illustrate the native map behavior and how we solve for it is by using an embedded Power BI report. I am going to use some anonymized data from a logistics company that represents cell phone signal strength across one of their routes. The route in this example starts in the Chicago, IL area (United States). See the first tab in the report below.

Note: Use the “full screen” button in the lower right to expand

In the 4 visuals above, notice that the 1st and 3rd plot only a tiny portion of the latitude/longitude values. You can confirm this by hovering over the highest plotted point: its latitude is much less than the max latitude shown in the multi-row card visual in the upper right. You can also just zoom out on these and see that no points are plotted anywhere near Chicago, which is where this route starts.

I included the 2nd visual because I found it interesting that when I used a value in the SIZE field that may be greater than 1, it seemed to plot more evenly across the entire span of the latitude; however, I still would not consider this “representative”.

The 4th visual is simply a scatter chart of the latitude/longitude. Note that when hovering over the “i” icon in the upper left, it states that a representative sample is shown. This is what we want to achieve in the bubble map, which is not done natively. In this scenario, where we have a continuous latitude/longitude plot, this is a good solution.

 

Now navigate to the second page of the report. Here you will find the map visual always giving a representative sample regardless of how many dates you select in the slicer. I have solved the real business problem here by adding the cell phone signal strength as the legend across the route.

Compare the min and max latitudes in the card visual with the highest and lowest plotted points on the map (hover over them). You should find them consistent, with the points in between filled in nicely.

“If there is an easier way to do this than what I am about to show, please share with me.”

I worked through several iterations before arriving at the conclusion that RAND() (or rather RANDBETWEEN()) was the magic I needed. This is the random number generator expression, for those of you that don’t know. And for those of you that do know, you are probably questioning my judgement a little bit at this point 🙂 but this was truly the only way I got this to work without having to make too many assumptions about the data.

Let’s get into the DAX.

It is important to know how you plan to slice the data in your map visual. In my case, I want to be able to select a date or range of dates to review the cell phone signal strength and how it may vary over time. We can simply get the count of rows in the table with a formula like this:
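A sketch, assuming the table is named Signals (the actual table name isn’t shown in this post):

    Total Signals (WRONG) = COUNTROWS ( Signals )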

I have appended “WRONG” to the measure name because we run into an issue of CONTEXT on the map, and we have to do a context transition (as described by Alberto Ferrari here). In the correct formula below, we need to include ALL rows of the table except the rows being filtered out by the date slicer.
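Continuing the sketch, with a hypothetical Signals[Date] column feeding the date slicer; ALLEXCEPT removes every filter on the table, including the lat/long of a selected map point, except the date:

    Total Signals =
    CALCULATE (
        COUNTROWS ( Signals ),
        ALLEXCEPT ( Signals, Signals[Date] )
    )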

Notice on the second page of my embedded report above, I have listed a couple of the wrong measures in my card visual. Apply a date filter and then select a single point on the map. Notice how “Total Signals (WRONG)” changes to 1: you have just applied filter context to the measure by selecting that single point on the map. We have to use this measure to get a representative sample, but it has to ignore the map’s filter context, which is why the RIGHT measure above does the context transition to get ALL of the rows except those being filtered by the date.

Now we need to apply a sample rate to the selected rows. The sample rate will be dynamic, depending on how many rows are selected. We start to exceed the bounds of the map visual at around 3000 points, so I play it safe below and use 2500 as my denominator. This is because the later calculation will not exactly match this, and we may end up with more than 2500 rows.
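Expressed in DAX, for example:

    Sample Rate = CEILING ( [Total Signals] / 2500, 1 )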

The CEILING function just rounds the decimal number up to the nearest integer, as I have specified 1 as the second argument.

On page 2 of the report above, you can see how the sample rate changes as the total number of signals increases with multiple dates selected in the slicer.

As the next building block in the representative sample formula, we pull out the random number generator to give a random number between 1 and the dynamic sample rate calculated above:
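Something like:

    Random Number = RANDBETWEEN ( 1, [Sample Rate] )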

We will use the number generated from this measure and compare it to our Sample Rate. When they are equal we will plot the point.


Before you call me crazy and question my judgement… yes, I know that RAND or RANDBETWEEN does not guarantee a consistent sampling of the data. If my sample rate is 100, it may take 150 or even 200 tries before the measure above equals the sample rate. But it may also take only 5 tries. I look at the use of RAND() as a “better than the alternative” approach, as it gives me the opportunity to get a representative sample of my data versus a cut-off and unusable data set.

In a prior attempt, I used the MOD function and some other DAX acrobatics to try to ensure a consistent representation of the data, but in each attempt I was defeated by the filter context issue of the lat/long on the map. This is where I would humbly welcome feedback from any DAX ninjas if there is a better way.

The reason the RANDBETWEEN() function works here is that it is re-evaluated for every single point the map tries to plot.

Below is the measure for the representative sample. Use this in the SIZE field.

As it is possible I could have multiple rows being counted for the same lat/long position, I add some additional DAX to ensure I only have a size of 1 or BLANK().
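A sketch of what those two requirements produce:

    Representative Sample =
    -- plot the point only when the random draw hits the sample rate;
    -- returning exactly 1 (never the row count) keeps every bubble the
    -- same size, even when several rows share the same lat/long
    IF ( [Random Number] = [Sample Rate], 1, BLANK () )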

The result below is a nice smooth representative sample without too much DAX needing to be written.

[screenshot: map with a smooth representative sample of the route]

Pros for using this approach:

  • Easy… low difficulty level
  • Native Bing map means it is zoomable… in my use case, it is important to be able to zoom in on the areas that have bad signal coverage
  • Great for big datasets… I believe all the DAX used above is available when using DirectQuery

Cons:

  • Using RAND does not ensure a consistent sampling
  • May not be well suited for geographically dispersed data (see the hierarchical approach below)

Using R script with ggmap

Another way to approach the sampling problem is with an R visual. We can create an R Visual that uses ggmap to plot the data points for our route.

I drag in the LATITUDE, LONGITUDE, Max_Signal_Rate, and Total Signals fields (to avoid pre-sampling the data, reset the Total Signals measure to just use the COUNTROWS function).

Here is the R script that then needs to be put in the R Script Editor:
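The original script isn’t reproduced here, so below is a minimal sketch under the field names above. The map centering and zoom level are my own assumptions, and dataset is the data frame Power BI passes to an R visual.

    library(ggmap)   # loading ggmap also loads ggplot2

    # center the base map on the plotted points (zoom 8 assumed to fit the route)
    center <- c(lon = mean(dataset$LONGITUDE), lat = mean(dataset$LATITUDE))
    route_map <- get_map(location = center, zoom = 8)

    # one point per row, coloured by signal strength (default colour scale)
    ggmap(route_map) +
      geom_point(aes(x = LONGITUDE, y = LATITUDE, colour = Max_Signal_Rate),
                 data = dataset)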

To use the above code, you will need to be sure you have installed R on your machine and that you have also installed the ggmap and ggplot2 packages.

The above code will produce the plot shown below and can still be filtered with a date slicer.

[screenshot: ggmap R visual plotting the route points]

As I am still learning R, I didn’t take the time to assign a specific coloring scheme for the plotted points, so the above just took the default.

There are a couple of obvious CONS to this approach:

  1. You have to know R. It is not nearly as easy as just using native Power BI visuals and a little bit of DAX.
  2. The map is not zoomable. It is completely static, which prevents me from solving my business problem of zooming in on the spots where signal strength is low. For my use case this does not work; in another use case, the zoom may not be an issue.

A couple of PROS for this approach:

  1. I can plot more data points with ggmap. There still appears to be a limit, but it is larger than the out-of-box map’s. I have not yet determined what that limit is, and I believe it has more to do with Power BI R visuals than with the ggmap functionality itself.
  2. With the ability to “code your own visual”, there is much more control and opportunity to create a custom experience. Instead of the geom_point function, we could use geom_tile to get a more “blocky” heat-map style plot as shown in this tutorial, or even geom_path, which would allow a point A to point B type of map if you didn’t need to plot information along the way, as shown in my example.

Using a Location Hierarchy

This to me seems like an obvious approach when dealing with location data that has a natural hierarchy such as Country->State/Province or Country->City.

So instead of having something like this:

[screenshot: bubble map cut off by the data point limit]

by using a hierarchy in the “Location” field, you can right-click and “drill down” on a parent geography to see the details:

[screenshots: drilling down from the parent geography to the detail points]

This is a very clean way to keep your data points under the threshold of around 3000 without having to write any DAX or R. The biggest problem, however, is that it is fully dependent on the location hierarchy providing the necessary split to keep the lowest level of detail from going over the threshold.

 

These are 3 different ways of dealing with too much data trying to be plotted on your maps. Again, for the representative sample pattern i showed i am interested in any thoughts of better approaches. Leave them in the comments or ping me on Twitter at @angryanalytics.