A Bit of Intro

If I recall correctly, I completed the first version of this data architecture diagram in 2012, back when we used terms like "road map" and "blueprint." Along with different terms, we were also using the traditional SSIS, SSAS-MultiD and SSRS tools. Now we live in the world of cloud everything, although we are still driving from SRC to DST (source to destination). I'm up for whatever terminology you want to use, but can we agree that we are surely on a different highway? For my classical BI Blueprint, click here; to see an Azure road map for BI, please take a look below. Disclaimer: I create a different diagram for every engagement, so think of this as a suggestion, not a mold.

Azure Data Architecture BI Talking Points:
BI Advice from the University of Hard Knocks:
Conclusion of the Matter: I am not explaining every column in the data architecture because the columns in the diagram above are not applicable to everyone. For example, almost everyone needs a semantic layer, but not everyone needs a logical data store for operational reporting. Column #5 can be done in Spark as well as Databricks; instead of my telling you what the best solution is, let's talk about it. For every column there is a good, better and best solution, and, good heavens (!), not everyone needs a thirteen-point data architecture! All things in moderation, right? If you have taken the time to read this, I am asking you to please start planning before you start building! Opening Power BI and mashing up data from three different sources is generally not a scalable solution. Get started with a data architecture diagram and build a better BI house!
I have a PPTX slide that I use when speaking about data modeling for BI. (You can find my Pragmatic Works webinar here.) The slide is OLTP vs OLAP and is a 10K-foot view of an actual denormalized ERD. I'm sharing the above slide for those who are new to denormalization, but I think more can be said about how to handle dimensions that "snowflake" or daisy-chain to each other. You can see this happen in AdventureWorks between the Product, ProductSubCategory and ProductCategory tables. When life is simple and all fact tables relate to Product on ProductKey, denormalization is easy to model.

Option #1: Combine the parent and child tables into a single subject area dimension.

In the above solution, all three product tables were joined together into a single DimProductDenormalized. The new table contains columns and keys from all three original tables. This works well until a second fact table does not have a ProductKey, only a ProductSubcategoryKey. Now we are in a bit of a fix.

1. SSAS multidimensional cubes are designed to effectively and efficiently handle this exact scenario through attribute and cube relationships.
2. SSAS tabular models will require a second dimension at the higher (product category) grain.
3. The SQL database does not support a relationship between the denormalized DIM and the FactSnapshot, and in fact will throw a "The columns in table 'DimProductDenormalized' do not match an existing primary key or UNIQUE constraint" error. For me, this is not an issue, as I only keep SQL-defined table relationships in a data warehouse for the first year as a second measure of protection for the ETL, which should be enforcing referential integrity anyway.
4. By combining small dimensions into a single subject area dimension, dimension count in the semantic layer has decreased and natural hierarchies are now available.

Key Takeaway: The above scenario only works well for small code + description tables that can be combined to form a subject area dimension.
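Roughly, the Option #1 join can be sketched in T-SQL like this. This is a minimal sketch against the AdventureWorks OLTP tables; a real ETL step would also add a surrogate key, an unknown-member row and any type 2 history handling:

    -- Sketch only: flatten Product -> ProductSubcategory -> ProductCategory
    -- into a single subject area dimension.
    SELECT
        p.ProductID,                     -- business key from the source
        p.Name        AS ProductName,
        psc.ProductSubcategoryID,
        psc.Name      AS ProductSubcategoryName,
        pc.ProductCategoryID,
        pc.Name       AS ProductCategoryName
    INTO dbo.DimProductDenormalized
    FROM Production.Product AS p
    LEFT JOIN Production.ProductSubcategory AS psc
        ON psc.ProductSubcategoryID = p.ProductSubcategoryID   -- many products have no subcategory
    LEFT JOIN Production.ProductCategory AS pc
        ON pc.ProductCategoryID = psc.ProductCategoryID;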
Option #2: Pull the individual dimension keys into the fact table, thereby removing the dimension snowflake and creating a star schema around each fact table.

1. Snowflaked relationships are no longer used, although they can still exist on disk.
2. Fact tables of different grains both have a true star schema.
3. Dimension count in SSAS has increased.
4. The "subject area dimension" advantage of Option #1 is lost.

Leaving the Land of AdventureWorks

How might we implement these ideas in a more complex scenario? What happens when each snowflaked dimension is already a subject area dimension and contains ten, twenty or more dimension attributes? Please allow me to jump over to Visio now and bring in a conceptual diagram.

1. Multidimensional cubes handle 3NF through dimension design and referenced cube relationships. All is well.
2. Tabular models can consume 3NF by default design.
3. SSAS, SSRS and Power BI can all handle this ERD effectively. Changing this data model is not a requirement for a data warehouse design. In fact, this is what I think of as a Bill Inmon (the official father of data warehousing) design.
4. If you are familiar with my BI Blueprint, you will usually find this data warehouse design in column 5.
5. This data model is NOT optimized for reporting, but it CAN WORK just fine.

Significant Issue: Type 2 SCD (slowly changing dimensions) can explode row counts if changes are perpetuated down all referenced relationships. For example, when type 2 DimCostCenter has a change and inserts a new row, DimLineOfService has to react and insert a new row, as does DimCustomer. Regardless of 2NF, 3NF or worst normal form, a type 2 data warehouse model with layers of parent and grandparent dimension tables will have this problem. This needs its own blog post.

Staying focused on snowflaking dimensions ... here is the Option #2 solution shown with larger dimensions that cannot be combined.

1. Just like the AdventureWorks star schema, this is the same concept and gives exceptional query performance because this design is optimized for reporting.
2. The dim-to-dim relationships still exist, although not pictured, and are used only for ETL.
3. These are very Kimball-ish, FK (foreign key) heavy fact tables.
4. Key Concept: Parent and grandparent dimension FKs are brought into the fact table -- including the many role playing dimension keys that may exist in one or more dimension layers, for example FactTable.CustomerIndustry1Key, FactTable.CustomerIndustry2Key and FactTable.CustomerIndustry3Key.
5. Be sure to prefix your role playing dimension keys, or the cost center associated with the transaction will get confused with the default cost center associated with the customer (see the sketch after this list).
6. In my BI Blueprint, you will find this data warehouse design in column 7.
7. Type 2 SCD challenges still exist.
8. This is a good idea (and my personal preference), but not everyone is doing it, and there are very effective 3NF data warehouses that function daily.

For a company that has an extraordinary number of snowflaked dimensions or strict (or unknown) type 2 requirements, this star schema may become a disadvantage.
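To make the role playing key idea concrete, here is a minimal, hypothetical fact table sketch; every table and column name is made up for illustration:

    -- Hypothetical FK-heavy fact table with prefixed role playing dimension keys
    -- pulled up from parent and grandparent dimensions.
    CREATE TABLE dbo.FactTransaction
    (
        DateKey                       INT NOT NULL,
        CustomerKey                   INT NOT NULL,
        TransactionCostCenterKey      INT NOT NULL,   -- cost center on the transaction itself
        CustomerDefaultCostCenterKey  INT NOT NULL,   -- prefixed so it is not confused with the key above
        CustomerIndustry1Key          INT NOT NULL,   -- grandparent dimension keys brought into the fact
        CustomerIndustry2Key          INT NOT NULL,
        CustomerIndustry3Key          INT NOT NULL,
        TransactionAmount             DECIMAL(19, 4) NOT NULL
    );

Only the keys move into the fact table; the dim-to-dim relationships remain behind the scenes for ETL, as noted in item #2.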
Situation: Many CRM data sources use varchar() GUID-looking values for primary keys. This blog post applies to any source system for a reporting and analytics project that uses text/string/character values to join transactional tables. Below are example PKs from the SalesForce Opportunity table. If you have the privilege of a data warehouse, the extract, transform and load (ETL) process often, as a best practice, replaces source system PKs with data warehouse identity seed integer values. However, with the trends in data mashups (Excel Power Query and the Power BI Query Editor), this may not be happening. Also, some PKs, like the ones pictured above, are often brought forward into a data warehouse as a secondary "business key," and users are pulling them into their report data sources for drill-down / source system lookup capabilities.

Problem: String values do not compress as well as integer values, so when you use these varchar() PKs in multidimensional cubes, tabular models, Excel Power Pivot and Power BI (PBI) Desktop, file and memory sizes increase dramatically. As of January 2017, PBI in-memory files have a maximum file size of 250MB. This can be highly problematic, as explained by my Pragmatic Works colleague, Rachael Martino, in her SQL Saturday presentation Tips and Techniques for Power BI. (You can find a corresponding blog post from Rachael here.) With her permission, I have borrowed the following screen print, which shows the problem and the resolution result clearly.

Summary Resolution: Assign a unique integer value to each varchar() primary key value. This may be easier said than done, but look at the result above. On the left is the "Before" memory consumption of a SalesForce varchar() PK. When an integer value was substituted ("After"), memory size dropped from 25,563.63KB to 0.12KB.

Resolution Illustrated: For the next screen print I totally cheated and used the T-SQL ROW_NUMBER() and RANK() functions to illustrate my point and assign a unique integer to each varchar() value. However, there are at least three potential problems here:

1. NewAccountID and NewRecordTypeID share the same integer value. This may be okay -- it depends on how your ETL is written.
2. If you are working in Excel Power Pivot, an SSAS data source or the Power BI query editor, you do not have the ETL capabilities that will push these same integer values into multiple child tables.
3. If you are working in Azure DW, as of January 2017 Azure DW did not have auto-incrementing identity seed capabilities, but that is a semi-related topic for another day.

Creative Problem Solving, Please: When I find myself in a bit of a fix like this, the answer is always the same: what is best for reporting and analytics (R&A)? ETL (or ELT) is not the hub of my BI wheel. In fact, data transformation frequently writes a check payable to Father Time to make a better R&A experience. This is another one of those instances. You should handle this in your source-to-data-warehouse data integration step.
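For reference, a simplified version of that ROW_NUMBER()/RANK() trick might look like the query below; stg.Opportunity is a hypothetical staging table, and a real ETL process would persist this mapping so that child tables receive the same integer values:

    -- Assign a unique integer to each varchar() key value (illustration only).
    SELECT
        o.Id                                   AS OpportunityId,    -- varchar() GUID-looking PK
        ROW_NUMBER() OVER (ORDER BY o.Id)      AS NewOpportunityId, -- unique integer per row
        o.AccountId,
        RANK() OVER (ORDER BY o.AccountId)     AS NewAccountId,     -- repeated AccountIds share one integer
        o.RecordTypeId,
        RANK() OVER (ORDER BY o.RecordTypeId)  AS NewRecordTypeId   -- can collide with NewAccountId (problem #1 above)
    FROM stg.Opportunity AS o;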
As a last resort, you can play ROW_NUMBER() and RANK() games inside your data source views. You can also continue to use these varchar() PK values for table relationships inside SSAS tabular models, but be sure to 'Hide from Client Tools' so they don't end up being pulled into PBI memory or used as slicers or column values. If you are using tabular models and include these columns in your design, there is no way around paying the memory price in your SSAS processed model. Multidimensional cubes will throw a warning for bad cardinality of a dimension attribute, but if you do not place these columns inside any *.dim, and only use them for relationships in your DSV, you should be okay. There really is no happy ending here if you cannot get rid of these things within medium-to-large data sets.

Let's remember, each MS BI tool is designed for a specific purpose. SSRS is a reporting tool, although it can also provide dashboards. Power BI Desktop is designed for analysis of aggregated data -- not paginated, granular reporting. Consequently, if we use each MS BI tool for what it does best, an SSAS Action or a Power BI link to a granular SSRS report can be a good solution here. "Simply" pass a set of input parameters to SSRS and present to the user only the varchar() values needed.

The intent of this blog post isn't to rewrite the Kimball Group Reader. Below is just my simple summary of what constitutes a subject area dimension vs a fact (a group of measures).
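As a rough illustration of the split (all table and column names below are made up): a subject area dimension carries the descriptive attributes of one business entity, while a fact table carries dimension keys plus the numeric measures you aggregate.

    -- Illustrative only: one subject area dimension and one fact table.
    CREATE TABLE dbo.DimCustomer
    (
        CustomerKey         INT IDENTITY(1, 1) PRIMARY KEY,  -- data warehouse surrogate key
        CustomerBusinessKey VARCHAR(50)   NOT NULL,          -- source system PK kept as a business key
        CustomerName        NVARCHAR(100) NOT NULL,          -- descriptive attributes users slice and filter by
        CustomerSegment     NVARCHAR(50)  NULL,
        CustomerCountry     NVARCHAR(50)  NULL
    );

    CREATE TABLE dbo.FactSales
    (
        DateKey       INT NOT NULL,            -- foreign keys pointing at the dimensions
        CustomerKey   INT NOT NULL,
        ProductKey    INT NOT NULL,
        SalesAmount   DECIMAL(19, 4) NOT NULL, -- the measures, i.e. the "group of measures"
        OrderQuantity INT NOT NULL
    );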
To create data models for business intelligence, you first need to understand your BI Blueprint; then it is critical to truly understand the difference between reporting and analytics (R&A) data models. Modeling data for R&A happens in pipes #5, #7 and sometimes (but not optimally) in the DSVs (data source views) found in pipe #8. Keeping in mind that tables arranged in a circle do not a star schema make, below is a slide that articulates, in part, the difference between the two. Think of reporting as a pile of Tinker Toys -- you SELECT tables and JOIN...JOIN...JOIN to a bunch more. Analytics is about flattened, "denormalized" data arranged into subject area dimensions and measure groups, preferably with pre-processed totals, stored like a Rubik's Cube. If you are serious about data modeling for business intelligence, dig deep into the following concepts, each of which is worthy of an individual blog post.
When I get my blogging juices flowing, I'd like to post a bit about each one. In the interim, you can contact me via the About page. Inserted below is a slide I use when talking about data modeling for MS BI. (If you have brought me into your company for MS BI mentoring or training, you already have a version. :-) ) I am posting it here because if you are planning a BI project, you need your own version of one of these! Why?
Please allow me to encourage you -- open Visio and get blueprinting! For a deeper dive into a BI blueprint for your company, drop me a note under the About section of this site.

What purpose is driving your BI project? (I am talking about true OLAP in this blog post, not OLTP.) When you have an ETL or hardware choice to make, you make your decision based on what is best for... what? Please allow me to suggest that a BI project should have one (1) central purpose: reporting and analytics (R&A). Period. The end. If this is true (and I ask this with kindness), how then can hardware, network, data model, data integration and data visualization choices be made without R&A clearly defined? For instance, I have often thought that the most critical wheel spoke of a successful BI implementation is the data model. "Build it and they will come" is not a good catch phrase for a BI project, because you may end up building a football stadium when your users intended to play basketball. You can retrofit your football field, but wouldn't it have been a lot better (and cheaper) if you had built a basketball court to start?

Possible indicators that a BI model was not written with R&A in mind:
Possible indicators that hardware was not specified with R&A in mind:
I am sure you can come up with actual indicators from your own projects, but the point is this: EVERYTHING we do in our BI projects should take R&A into consideration. Pick a wheel spoke -- any wheel spoke. When you draw the blueprint for that spoke, UNDERSTAND your complex business logic and KNOW your reporting requirements. What do you think? Here are a few talking points for your BI team:
How do all of your choices support R&A? When you build a house, you start from the foundation and work up. When you build a BI solution, it is logical to start from the foundation and build up as well. However, what I often see is someone working on the house roof (reporting) before there is a foundation (data model, integration, security ...). We all understand that some houses are pre-fabricated: the individual pieces are built independently of each other and then somehow come together in one miraculous final push to production. However, in my opinion, that is not the "industry standard," and as a BI consultant, I will rarely recommend any BI build method other than "from the ground up."

I consider the house roof, reporting and analytics, to be the "fun" part of every BI project because it is the most visible. (I have observed that those who get to write reports and create dashboards often progress quickly to hero status.) However, it is the responsibility of each team member on a BI project to produce a product that has these characteristics:

1. Accurate (is dependable and truthful)
2. Scalable (can grow and change in step with business fluctuations)
3. Discoverable (you can find the things you want, such as metadata, measures and attributes)

Talking Points:
Self-Service BI -- PPS (PerformancePoint Services), Excel, Report Builder
Automated Reporting -- SSRS, Excel
Source code control -- TFS (Team Foundation Server), corporate file store
Documentation -- in-line code, TFS, file store, DMVs, Visio, Word documents