
Everything You Need to Know About the New Power BI Storage Mode

by Admin · August 21, 2025 · in Artificial Intelligence



The goal of this article is NOT to answer the question: “Which one is ‘better’, Import or Direct Lake?”, because that is impossible to answer; there is no single solution to “rule them all”… While Import should (still) be the default choice in most cases, there are certain scenarios in which you might choose to take the Direct Lake path. The main goal of this article is to explain how Direct Lake mode works behind the scenes and to shed more light on various Direct Lake concepts.

If you want to learn more about how Import (and DirectQuery) compares to Direct Lake, and when to choose one over the other, I strongly encourage you to watch the following video: https://www.youtube.com/watch?v=V4rgxmBQpk0

Now, we can start…

I don’t know about you, but when I watch movies and see some breathtaking scenes, I always wonder: how did they do THIS?! What kind of tricks did they pull out of their sleeves to make it work like that?

And I have the same feeling when watching Direct Lake in action! For those of you who may not have heard about the new storage mode for Power BI semantic models, or are wondering what Direct Lake and Allen Iverson have in common, I encourage you to start by reading my previous article.

The goal of this one is to demystify what happens behind the scenes, how this “thing” actually works, and to give you a hint about some nuances to keep in mind when working with Direct Lake semantic models.

Direct Lake storage mode overcomes the shortcomings of both Import and DirectQuery modes, providing performance similar to Import mode without data duplication or data latency, because the data is retrieved directly from delta tables during query execution.

Sounds like a dream, right? So, let’s examine the different concepts that enable this dream to come true…

Framing (aka Direct Lake “refresh”)

The most common question I hear these days from clients is: how do we refresh a Direct Lake semantic model? It’s a fair question. They have been relying on Import mode for years, and Direct Lake promises “Import mode-like performance”, so there should be a similar process in place to keep the data up to date, right?

Well, jein… (What the heck is this now, I hear you wondering 😀). Germans have a perfect word (one of many, to be honest) for something that can be both “yes” and “no” (ja = yes, nein = no). Chris Webb has already written a nice blog post on the topic, so I won’t repeat what’s there (go and read Chris’s blog, it is one of the best resources for learning Power BI). My idea is to illustrate the process happening in the background and emphasize some nuances that can be affected by your choices.

But, first things first…

Syncing the data

When you create a lakehouse in Microsoft Fabric, two additional objects are automatically provisioned: a SQL Analytics Endpoint for querying the data in the lakehouse (yes, you can write T-SQL to READ the data from the lakehouse), and a default semantic model, which includes all the tables from the lakehouse. Now, what happens when a new table arrives in the lakehouse? Well, it depends :)
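
Just to make that last point about T-SQL concrete, here is a minimal sketch of reading a lakehouse table through the SQL Analytics Endpoint from Python. The server and database names are placeholders (copy the real connection string from the endpoint settings in Fabric), and using pyodbc with ODBC Driver 18 for SQL Server is my assumption, not something Fabric prescribes:

import pyodbc

# Placeholder connection details; take the real server name from the
# SQL Analytics Endpoint settings in Fabric.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=your-endpoint.datawarehouse.fabric.microsoft.com;"
    "Database=YourLakehouse;"
    "Authentication=ActiveDirectoryInteractive;"
    "Encrypt=yes;"
)

# Plain T-SQL, read-only: every delta table is exposed as a SQL table.
cursor = conn.cursor()
cursor.execute("SELECT TOP 5 CustomerKey, EmailAddress FROM dbo.DimCustomer;")
for row in cursor.fetchall():
    print(row)
conn.close()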

If you open the Settings window for the SQL Analytics Endpoint and go to the Default Power BI semantic model property, you’ll see the following option:

Image by author

This setting lets you define what happens when a new table arrives in the lakehouse. By default, that table will NOT be automatically included in the default semantic model. And that’s the first point relevant for “refreshing” the data in Direct Lake mode.

At this moment, I have four delta tables in my lakehouse: DimCustomer, DimDate, DimProduct, and FactOnlineSales. Since I disabled auto-sync between the lakehouse and the semantic model, there are currently no tables in the default semantic model!

Image by author

This means I first need to add the data to my default semantic model. Once I open the SQL Analytics Endpoint and choose to create a new report, I’ll be prompted to add the data to the default semantic model:

Image by author

Okay, let’s examine what happens when a new table arrives in the lakehouse. I’ve added a new table to the lakehouse: DimCurrency.

Image by author

But, when I choose to create a report on top of the default semantic model, there is no DimCurrency table available:

Image by author

I’ve now enabled the auto-sync option, and after a few minutes, the DimCurrency table appeared in the default semantic model objects view:

Image by author

So, this sync option lets you decide whether a new table from the lakehouse will be automatically added to the semantic model or not.

Syncing = adding new tables to a semantic model

But what happens with the data itself? That is, if the data in the delta table changes, do we need to refresh the semantic model, as we had to do in Import mode, to have the latest data available in our Power BI reports?

It’s the right time to introduce the concept of framing. Before that, let’s quickly examine how our data is stored under the hood. I’ve already written about the Parquet file format in detail, so here it’s enough to remember that our delta table DimCustomer consists of one or more parquet files (in this case, two parquet files), while the delta_log enables versioning: tracking all the changes that happened to the DimCustomer table.

Image by author
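
By the way, you can inspect those versions yourself. A minimal sketch, assuming the deltalake Python package and the default lakehouse mount path that Fabric notebooks expose (both assumptions on my side; the table path is a placeholder):

from deltalake import DeltaTable

# Placeholder mount path of the delta table inside a Fabric notebook.
dt = DeltaTable("/lakehouse/default/Tables/DimCustomer")

print(dt.version())  # current version recorded in delta_log
print(dt.files())    # parquet files that make up this version
for commit in dt.history(3):  # the last few commits, e.g. our reloads
    print(commit.get("version"), commit.get("operation"))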

I’ve created a super basic report to examine how framing works. The report shows the name and email address of the customer Aaron Adams:

Image by author

I’ll now go and change the email address in the data source, from aaron48 to aaron048:

Image by author

Let’s reload the data into the Fabric lakehouse and check what happened to the DimCustomer table in the background:

Image by author

A new parquet file appeared, while at the same time a new version was created in the delta_log.

Once I go back to my report and hit the Refresh button, the updated email address (aaron048) shows up:

Image by author

This happened because the default setting for semantic model refresh is configured to detect changes in the delta table and automatically update the semantic model:

Image by author

Now, what would happen if I disabled this option? Let’s check… I’ll set the email address back to aaron48 and reload the data into the lakehouse. First, there is a new version of the file in the delta_log, the same as in the previous case:

Image by author

And, if I query the lakehouse via the SQL Analytics Endpoint, I see the latest data included (aaron48):

Image by author

But, if I go to the report and hit Refresh… I still see aaron048!

Image by author

Since I disabled the automatic propagation of the latest data from the lakehouse (OneLake) to the semantic model, I have only two options available to keep my semantic model (and, consequently, my report) up to date:

  • Enable the “Keep your Direct Lake data up to date” option again
  • Manually refresh the semantic model. When I say manually, it can be literally manual, by clicking the Refresh now button, or by executing the refresh programmatically (i.e., using Fabric notebooks or REST APIs) as part of an orchestration pipeline; see the sketch right after this list
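
For the programmatic route, here is a minimal sketch, assuming the semantic-link (sempy) library available in Fabric notebooks; the model and workspace names are placeholders:

import sempy.fabric as fabric

# Trigger a refresh of a Direct Lake semantic model from a notebook.
# For Direct Lake, this re-frames the model (a metadata-only operation).
fabric.refresh_dataset(
    dataset="Sales Direct Lake Model",  # placeholder model name
    workspace="My Workspace",           # placeholder workspace name
    refresh_type="full",
)

The same call can run as the last step of an orchestration pipeline, so the model is reframed exactly once, after all the tables have landed.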

Why would you want to keep this option disabled (like I did in the latest example)? Well, your semantic model usually consists of multiple tables, representing the serving layer for the end user. And you don’t necessarily want the data in the report to be updated in sequence (table by table), but probably only after the entire semantic model has been refreshed and synced with the source data.

This process of keeping the semantic model in sync with the latest version of the delta table is called framing.

Image by author

In the illustration above, you see the files currently “framed” in the context of the semantic model. Once a new file enters the lakehouse (OneLake), here is what should happen in order to have the latest file included in the semantic model.

The semantic model must be “reframed” to include the latest data. This process has several implications that you should be aware of. First, and most important, whenever framing occurs, all the data currently stored in memory (we are talking about cache memory) is dumped out of the cache. This is of paramount importance for the next concept that we are going to discuss: transcoding.

Next, there is no “real” data refresh happening with framing…

Unlike Import mode, where kicking off the refresh process literally puts a snapshot of the physical data into the semantic model, framing refreshes metadata only! So, the data stays in the delta table in OneLake (no data is loaded into the Direct Lake semantic model); we are only telling our semantic model: hey, there is a new file down there, go and take it from here when you need the data for the report… This is one of the key differences between Direct Lake and Import mode.

Since the Direct Lake “refresh” is just a metadata refresh, it is usually a low-intensity operation that shouldn’t consume much time or resources. Even if you have a billion-row table, don’t forget: you aren’t refreshing a billion rows in your semantic model, you refresh only the information about that huge table…

Transcoding — Your on-demand cache magic

Fine, now that you know how to sync data from a lakehouse with your semantic model (syncing), and how to include the latest “data about data” in the semantic model (framing), it’s time to understand what really happens behind the scenes once you put your semantic model into action!

This is the selling point of Direct Lake, right? The performance of Import mode, but without copying the data. So, let’s examine the concept of transcoding…

In plain English: transcoding is the process of loading parts of the delta table (when I say parts, I mean certain columns) or the entire delta table into cache memory!

Let me stop here and put the sentence above in the context of Import mode:

  • Loading data into memory (cache) is what ensures the blazing-fast performance of Import mode
  • In Import mode, unless you have enabled the Large Semantic Model Format feature, the entire semantic model is stored in memory (it must fit within the memory limits), whereas in Direct Lake mode, only the columns needed by the queries are stored in memory!

To put it simply: bullet point one means that once Direct Lake columns are loaded into memory, the behavior is exactly the same as in Import mode (the only potential difference may be the way data is sorted by VertiPaq vs. how it is sorted in the delta table)! Bullet point two means that the cache memory footprint of a Direct Lake semantic model can be significantly lower than, or in the worst case the same as, that of Import mode (I promise to show you soon). Obviously, this lower memory footprint comes at a cost, and that is the waiting time for the first load of a visual containing data that needs to be “transcoded” on demand from OneLake to the semantic model.

Before we dive into examples, you might be wondering: how does this thing work? How can data stored in the delta table be read by the Power BI engine the same way as if it were stored in Import mode?

The answer is: there is a process called transcoding, which happens on the fly when a Power BI query requests the data. This is not too expensive a process, since the data in Parquet files is stored very similarly to the way VertiPaq (the columnar database behind Power BI and AAS) stores the data. On top of that, if your data is written to delta tables using the V-Order algorithm (Microsoft’s proprietary algorithm for reshuffling and sorting the data to achieve better read performance), transcoding makes the data from delta tables look exactly the same as if it were stored in the proprietary format of AAS.
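
If you write your tables with Fabric Spark, V-Order can be controlled at the session level. A minimal sketch, assuming a Fabric notebook where spark and a DataFrame df already exist, and treating the exact property name as an assumption based on the Fabric Spark documentation:

# Enable V-Order for all writes in this Spark session.
spark.conf.set("spark.sql.parquet.vorder.enabled", "true")

# Write the delta table; the parquet files produced here are V-Ordered,
# which is what keeps transcoding cheap at query time.
df.write.format("delta").mode("overwrite").saveAsTable("DimCustomer")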

Let me now show you how paging works in reality. For this example, I’ll be using a healthcare dataset provided by Greg Beaumont (MIT license; go and visit Greg’s GitHub, it is full of fantastic resources). The fact table contains ca. 220 million rows, and my semantic model is a well-designed star schema.

Import vs Direct Lake

The idea is the following: I have two identical semantic models (same data, same tables, same relationships, and so on). One is in Import mode, while the other is in Direct Lake.

Import mode on the left, Direct Lake mode on the right

I’ll now open Power BI Desktop and connect to each of these semantic models to create an identical report on top of them. I need the Performance Analyzer tool in Power BI Desktop to capture the queries and analyze them later in DAX Studio.

I’ve created a very basic report page, with just one table visual, which shows the total number of records per year. In both reports, I’m starting from a blank page, as I want to make sure that nothing is retrieved from the cache, so let’s compare the first run of each visual:

Image by author

As you may notice, Import mode performs slightly better during the first run, probably because of the transcoding cost overhead for “paging” the data for the first time in Direct Lake mode. I’ll now create a year slicer in both reports, switch between different years, and compare performance again:

Image by author

There is basically no difference in performance (the numbers were additionally tested using the Benchmark feature in DAX Studio)! This means that once a column from the Direct Lake semantic model is paged into memory, it behaves exactly the same as in Import mode.

However, what happens if we include an additional column in the scope? Let’s test the performance of both reports once I put the Total Drug Cost measure in the table visual:

Image by author

And this is a scenario where Import easily outperforms Direct Lake! Don’t forget: in Import mode, the entire semantic model is loaded into memory, whereas in Direct Lake, only the columns needed by the query are loaded into memory. In this example, since Total Drug Cost wasn’t part of the original query, it wasn’t loaded into memory. Once the user included it in the report, Power BI had to spend some time transcoding this data on the fly from OneLake to VertiPaq and paging it into memory.

Memory footprint

Okay, we also mentioned that the memory footprint of Import vs Direct Lake semantic models can vary significantly. Let me quickly show you what I’m talking about. I’ll first check the Import mode semantic model details, using VertiPaq Analyzer in DAX Studio:

Image by author

As you can see, the size of the semantic model is almost 4.3 GB! And, looking at the most expensive columns…

Image by author

The “Tot_Drug_Cost” and “65 or Older Total” columns take almost 2 GB of the entire model! So, in theory, even if no one ever uses these columns in the report, they will still take their fair share of RAM (unless you enable the Large Semantic Model option).

I’ll now analyze the Direct Lake semantic model using the same approach:

Image by author

Oh, wow, the memory footprint is 4x smaller! Let’s quickly check the most expensive columns in the model…

Image by author

Let’s briefly stop here and examine the results displayed in the illustration above. The “Tot_Drug_Cst” column takes almost the entire memory of this semantic model; since we used it in our table visual, it was paged into memory. But look at all the other columns, including the “65 or Older Total” column that previously consumed 650 MB in Import mode! It is now 2.4 KB! It is just metadata! As long as we don’t use this column in the report, it won’t consume any RAM.

This means that when we talk about memory limits in Direct Lake, we are referring to a max memory limit per query! Only if the query exceeds the memory limit of your Fabric capacity SKU will it fall back to DirectQuery (assuming, of course, that your configuration follows the default fallback behavior setup):

Table from the official MS Learn documentation

This is a key difference between the Import and Direct Lake modes. Going back to our previous example, my Direct Lake report would work just fine with the lowest F SKU (F2).

“You’re hot then you’re cold… You’re in then you’re out…”

There is a famous song by Katy Perry, “Hot N Cold”, where the chorus says: “You’re hot then you’re cold… You’re in then you’re out…” This perfectly summarizes how columns are treated in Direct Lake mode! The last concept that I want to introduce to you is the column “temperature”.

This concept is of paramount importance when working with Direct Lake mode, because based on the column temperature, the engine decides which column(s) stay in memory and which are kicked back out to OneLake.

The more a column is queried, the higher its temperature! And the higher the temperature of the column, the greater the chances that it stays in memory.

Marc Lelijveld has already written a nice article on the topic, so I won’t repeat all the details that Marc thoroughly explained. Here, I just want to show you how to check the temperature of specific columns of your Direct Lake semantic model, and share some tips and tricks on how to keep the “fire” burning :)

SELECT DIMENSION_NAME
, COLUMN_ID
, DICTIONARY_SIZE
, DICTIONARY_TEMPERATURE
, DICTIONARY_LAST_ACCESSED
FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMNS
ORDER BY DICTIONARY_TEMPERATURE DESC

The query above, against the DISCOVER_STORAGE_TABLE_COLUMNS DMV, can give you a quick hint of how the “Hot N Cold” concept works in Direct Lake:

Image by author

As you may notice, the engine keeps the dictionaries of relationship columns “warm”, because of filter propagation. You can also see the columns that we used in our table visual: Year, Tot Drug Cst, and Tot Clms. If I don’t do anything with my report, the temperature will slowly decrease over time. But let’s perform some actions within the report and check the temperature again:

Image by author

I’ve added the Total Claims measure (based on the Tot Clms column) and changed the year on the slicer. Let’s take a look at the temperature now:

Image by author

Oh, wow, these three columns now have a temperature 10x higher than the columns not used in the report. This way, the engine ensures that the most frequently used columns stay in cache memory, so that report performance is the best possible for the end user.
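
If you prefer checking temperatures from a notebook instead of DAX Studio, the same DMV can be sent through semantic-link. A minimal sketch, assuming evaluate_dax passes DMV queries through to the model (the model name is a placeholder):

import sempy.fabric as fabric

# Column temperatures of a Direct Lake model, hottest first.
temperatures = fabric.evaluate_dax(
    dataset="Sales Direct Lake Model",  # placeholder model name
    dax_string="""
        SELECT DIMENSION_NAME, COLUMN_ID,
               DICTIONARY_TEMPERATURE, DICTIONARY_LAST_ACCESSED
        FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMNS
        ORDER BY DICTIONARY_TEMPERATURE DESC
    """,
)
print(temperatures.head(10))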

Now, the fair question would be: what happens once all my end users go home at 5 PM, and no one touches the Direct Lake semantic models until the next morning?

Well, the first user has to “sacrifice” for all the others and wait a little bit longer for the first run, after which everyone else benefits from having “warm” columns ready in the cache. But what if that first user is your manager, or the CEO?! No bueno :)

I have good news: there is a trick to pre-warm the cache by loading the most frequently used columns in advance, as soon as your data is refreshed in OneLake. My friend Sandeep Pawar wrote a step-by-step tutorial on how to do it (Semantic Link to the rescue), and you should definitely consider implementing this technique if you want to avoid a bad experience for the first user.
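
The idea, in a nutshell (see Sandeep’s tutorial for the complete version; the model, table, and column names below are placeholders): right after the data lands in OneLake, issue one cheap DAX query per frequently used column, so the engine transcodes and pages each of them into memory before the first real user shows up.

import sempy.fabric as fabric

model = "Sales Direct Lake Model"  # placeholder model name
hot_columns = [("fact", "Tot_Drug_Cst"), ("fact", "Tot_Clms")]  # placeholders

for table, column in hot_columns:
    # Touching the column's values forces it to be transcoded into memory.
    fabric.evaluate_dax(
        dataset=model,
        dax_string=f"EVALUATE ROW(\"touch\", COUNTROWS(VALUES('{table}'[{column}])))",
    )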

Conclusion

Direct Lake really is a groundbreaking feature introduced with Microsoft Fabric. However, since this is a brand-new solution, it relies on a whole new world of concepts. In this article, we covered the ones that I consider the most important.

To wrap up, since I’m a visual person, I prepared an illustration of all the concepts we covered:

Image by author

Thanks for reading!
