Episode 319 – Fabric January 2026 Feature Summary
Jason and John are back to tackle the Microsoft Fabric January 2026 Feature Summary. And what a summary it is. From Microsoft’s acquisition of Osmos to bring agentic AI data engineering to Fabric, to Git branch commit improvements, expanded OneLake security for mirrored databases, and significant Real-Time Intelligence enhancements, this episode covers the platform updates that will shape how you work with Fabric in 2026.
Microsoft Acquires Osmos: Agentic AI for Data Engineering
The first major announcement: Microsoft has acquired Osmos, an agentic AI data engineering platform.
“We don’t have a whole lot of detail on this,” Jason admitted. “We did no digging.” The blog post from Bogdan Crivat was “two paragraphs long and really didn’t give any detail other than it’s designed to help with agentic AI in the platform.”
What Microsoft did share: Osmos focuses on turning raw data into AI-ready assets faster. “They obviously saw something in their code that they really liked and wanted to integrate it,” Jason noted.
The promise? “It’ll make AI-ready assets faster. So that would be really cool if in fact this is focused on when you ask a question, it creates the output for you in a faster and better way. I’d love to see that.”
John’s take on integration: “I imagine it’ll get rolled into Copilot in Fabric. We’ll see… that would be logical.”
Jason couldn’t resist: “Since they had to pay for it, I imagine we’re going to as well.”
Fabric Platform: AI and Catalog Improvements
Auto-Summary for Semantic Models
Continuing the AI theme, there’s now an auto-summary feature for semantic models. “It’s basically in the UI,” John explained. “If you want to know what’s going on with a particular semantic model, you can hit a button and it’s going to analyze the semantic model and give you an answer to that question.”
John’s practical recommendation: “I would recommend copying and pasting into the description of the semantic model so other people don’t have to do this over and over again.”
The feature is also available programmatically. “There’s lots of different ways to achieve this. This is bringing it up to the UI.”
Parent-Child Hierarchy in OneLake Catalog
The OneLake catalog now displays parent-child hierarchies, visualizing structure more clearly.
“For example, Lakehouse appears with its auto-generated SQL analytics endpoint and an Event House appears with its related KQL databases,” Jason explained.
John summarized it as a UI change: “A hierarchical view as opposed to just being a flat view that you have to navigate on your own. So that makes total sense.”
Jason had seen Kim Manis post about this on LinkedIn about a month earlier. “I went, ‘Ooh, that’s nice. I’m looking forward to getting that.’ This just makes it a lot easier to figure out where stuff is. I’m a lineage view fan and this is a lineage view-like feature.”
Variable Library: Item Reference Type (Preview)
The Variable Library got a significant enhancement with the new Item Reference variable type.
John explained: “You can basically set a value, a text value or a numeric value, but now you can set something called an item reference. So if you’ve got a thing—a database, a dataflow, whatever the item may be—you can reference that and substitute the reference to that item out in your variable library.”
“It’s essentially a new variable type that you can leverage. When you think about how a variable library may be used—’I want to run this workflow’—well, you can substitute out different workflows via these variables.”
Jason’s take: “This looks nice. I haven’t messed with this yet. This is one that I’m interested to play with and see how it’s implemented. It’s in preview, by the way… and preview features are ones that are harder to turn on in some client environments, but I’m looking forward to trying this out. This does seem like it will make life a lot easier.”
John noted the maturity progression: “Everything in the variable library update here kind of pertains to this variable reference, but they’ve got some good experience in how you can select it across the board and all supported items. It’s getting better and better is what it boils down to… the variable library as it’s maturing.”
Git Integration: Enterprise and Branch Improvements
For John, this section was his “bag of tricks.” He dove into the Git improvements with enthusiasm.
GitHub Enterprise Support Expansion
“There were some GitHub Enterprise instances where this wouldn’t work. Now it does,” John explained. “If you’ve got specific residency requirements, there’s something specifically about your GitHub Enterprise—now it’s enabled, which is really cool.”
The broader context: “If you turn on version control and the ability to connect to DevOps and GitHub, now you can supply an access token in some way and have your workspace essentially replicated, or at least the definitions of all the items replicated, into Git.”
Why this matters: “This is an incredibly useful feature, not only for the ability to interrogate the structures of source items—there’s no real way in the UI to do that—but you can now tie things into high-end tools like GitHub and GitHub Enterprise with Copilot to do AI on this stuff and make your changes using AI. I’ve done a fair bit of that and it’s really quite powerful.”
Commit to Standalone Branch
This was the feature that really excited both hosts.
John explained the old workflow: “When you tie your workspace to a repository, you typically tie it to a main branch, but if you want to do some work, you might want to work on a different branch. The way you would do that in the past is go set up another branch and then go and change the branch that that workspace is tied to.”
The new way: “Now when you perform a commit, you can just say, ‘Hey, I want to commit this into a new branch’ from the UI within Fabric. That makes the process of doing that a lot more seamless, and it makes it a lot more obvious when someone maybe should do that as opposed to committing to main.”
Jason initially downplayed this one but quickly changed his tune: “There’s a decent amount of stuff here. Maybe we don’t need to go into as much detail on some of these things. And this was the one I picked and I was wrong for picking it.”
“This is really cool because quite frequently I’ve got developers who are working on something. It’s like, ‘We need to go down the rabbit hole on this, and I don’t know that this is going to be the right thing, so let’s commit to a new branch.’ Or ‘Let’s go off and branch this completely separately.’ As opposed to ‘Now I’m ready to actually commit. I’ve done some work.’ It’s like, ‘Huh, how do I back out and go…?’ This is a really nice way of dealing with it.”
Python SDK for Fabric REST API
“There is a Python SDK for the Microsoft Fabric REST API,” John announced. “Basically there’s now a Python library that encapsulates all of the Fabric API—the REST API—that you can call from Python.”
The blog provides details on how to enable this, start working with it, and includes lots of examples.
John’s assessment: “Python’s a fairly common language out there. I don’t think I’m going too far in saying that.”
“This makes it a whole lot easier to work with the Fabric REST API, which itself is undergoing lots of frequent changes and that’s all a very good thing.”
Jason saw the bigger picture: “The expansion of that API is important. And now the fact that you can talk directly to it from Python, that’s huge because a lot of it was, ‘Oh, I need some C# in order to be able to go off and do…’ Now if I can just take my Python developers, man, the wheels are churning, John. That’s a thinker. That’s a really good one.”
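Since the episode doesn’t walk through the SDK’s own classes and methods, here is a minimal sketch of the raw REST call the SDK wraps, using only the Python standard library. The base URL is the documented Fabric REST API endpoint; the helper name and the placeholder token are assumptions for illustration, not part of the SDK.

```python
import urllib.request

# Documented base URL for the Microsoft Fabric REST API.
FABRIC_API_BASE = "https://api.fabric.microsoft.com/v1"

def build_list_workspaces_request(token: str) -> urllib.request.Request:
    """Build (but don't send) a GET request for the List Workspaces endpoint.

    The Python SDK wraps calls like this one, handling authentication,
    paging, and response parsing so you don't assemble requests by hand.
    """
    return urllib.request.Request(
        f"{FABRIC_API_BASE}/workspaces",
        headers={"Authorization": f"Bearer {token}"},
        method="GET",
    )

req = build_list_workspaces_request("<your-entra-access-token>")
print(req.full_url)      # https://api.fabric.microsoft.com/v1/workspaces
print(req.get_method())  # GET
```

The point of the SDK is that this boilerplate (and the token acquisition behind it) disappears behind Python objects, which is exactly what makes the API approachable for Python-first teams.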
OneLake and OneLake Security: Expanding the Foundation
“A lot of the OneLake stuff lately is around OneLake security,” John noted. “I know we want to talk a little more about some stuff in an upcoming episode.”
Jason confirmed: “Yeah, there’s a white paper that got published this week that we’re going to digest a little bit more. We’re not ready to talk about it in detail because we want to get into it… But man, it’s a nice detailed white paper. I’m looking forward to talking about it.”
Granular APIs for OneLake Security (Preview)
John admitted he hadn’t worked with these yet, “but from what I understand in the past it was just one API and you had to form it. It was fairly complicated to work with that to accomplish these goals.”
Now: “We’ve got some dedicated endpoints to get, post, create and delete access rules within OneLake security. So that’s good. I call this a maturity feature.”
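To sketch what “dedicated endpoints” means in practice: each operation on an access role gets its own HTTP verb against a role-specific resource, instead of one general-purpose API you had to shape yourself. The URL pattern below follows Fabric’s usual item-scoped REST layout but is an assumption; verify the exact path and verbs against the official REST reference before relying on them.

```python
# Assumed base URL for the Fabric REST API (documented by Microsoft);
# the dataAccessRoles path segment is an illustrative assumption.
FABRIC_API_BASE = "https://api.fabric.microsoft.com/v1"

def data_access_roles_url(workspace_id: str, item_id: str) -> str:
    """Collection URL for OneLake access roles on a single Fabric item."""
    return (
        f"{FABRIC_API_BASE}/workspaces/{workspace_id}"
        f"/items/{item_id}/dataAccessRoles"
    )

# With dedicated endpoints, the HTTP verb selects the operation:
#   GET    -> list the item's access roles
#   PUT    -> create or replace role definitions
#   DELETE -> remove a role
print(data_access_roles_url("ws-0000", "lakehouse-0000"))
```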
Mirrored Items Support OneLake Security
“A big one from a capability standpoint,” according to John: “Mirrored items now have support for OneLake security. So if you’re mirroring out data from SQL or Cosmos or whatever the case may be, you can apply OneLake security to that mirrored data and have that working properly. And I think that’s probably a big deal.”
John had a question about Event House: “One thing I don’t know and I need to find out is if this applies—and I suspect it will because I think it’s fundamentally the same mechanism—to Event House’s OneLake availability feature, which is really this thing. If you don’t turn that on with Event House, that data is not available in OneLake. You turn it on and it… that smells and feels like mirroring to me, it’s just not called that.”
Jason raised an important gap: “The one thing I feel like that is still missing… as we’re dealing with these things, especially mirrored databases or mirror data sources, there’s already a security construct in place in the source, right?”
John acknowledged it: “Oh, there’s the can of worms. Yep.”
Jason’s wish: “What I would love to see is: read the security construct from the source and provide the opportunity to implement that and copy-paste into OneLake security. I don’t think it’s there. I haven’t heard anything about it. Have you seen anything?”
John wasn’t sure: “I gave that white paper a cursory overview and it might be in there. I need to dig into that to know though… I have not had the time to.”
Jason’s perspective: “We got to dig into that. That’s the one thing, I think, especially for mirrored. When you’re creating new Fabric databases, or when you’re pulling in data from other sources in semantic models and notebooks and things like that, it’s different. But if I’m mirroring, I should respect what the source was from a security perspective.”
“I have to imagine, we know the folks that are behind this stuff, they’re really smart. This shouldn’t be news. I’m just curious if it’s coming, if they’re looking at it. So I’m looking forward to looking through that white paper a little bit more in detail. If it’s not there, it’s definitely something that should get out on ideas.fabric.microsoft.com as a big thing.”
OneLake Diagnostic Immutable Logs
“It’s a compliance feature,” John explained succinctly. “I want to be able to demonstrate that… basically I want to be able to prevent anyone from ever being able to tamper with my logs so I could prove to an auditor down the road that this happened with my data. That makes sense. It’s there. I don’t think there’s that much more to say about it.”
Jason’s reaction: “My gaping maw was more about the fact that this wasn’t there before.”
John confirmed: “Well, you could edit the files, right? So this is making them non-editable.”
Data Engineering: Concurrency, Connections, and VS Code
Several data engineering updates landed in January, though Jason and John kept the coverage concise.
High Concurrency Mode for Lakehouse Operations: “It’s exactly what it says,” John noted. “We’re allowing more users to do more things at the same time with a Lakehouse without having to fall back to alternate or back-off strategies. So it’s a performance issue.”
Fabric Connection Inside Notebook (Preview): Jason thought this had been around for a while. John clarified: “It may have been here for a little while in preview, but essentially it’s the ability to add in connections—additional connections—to any given notebook and treat them as if they’re the home Lakehouse or the home database for any given notebook.”
Jason saw the benefit: “We’re now getting a view of it… I’m used to doing this inline in the notebook, whereas now they’re giving us a way to set these things up almost environmentally for the notebook… in the notebook’s own brain, it’s saying, ‘Hey, put your connections here.'”
John emphasized the connection concept: “That whole connection concept, this ubiquitous concept of all of the workloads using the same connection model, we’re seeing this getting adopted across the board. If they could just make maintenance of connections themselves a little easier… just go into that admin button that you get in your UI and have a look at connections and enjoy the experience of maintaining those. Not the easiest.”
Data Engineering VS Code Extension: You can now open and edit remote notebooks within Visual Studio Code locally. “I do love Visual Studio Code for an awful lot of this stuff,” John said. “So the more of that you can bring to a code-like environment, the more appealing this stuff is going to be to developers, quite frankly.”
Materialized Lake Views: Now support CREATE OR REPLACE, making schema changes easier. “You can just run a CREATE OR REPLACE command as opposed to having to drop it, recreate it, all of that stuff,” John explained.
Lineage Enhancements for Materialized Lake Views: Better visualization of what’s downstream. “Just like lineage views in Power BI,” Jason noted.
Data Warehouse: Statistics and Result Set Caching
John prefaced this section: “An area which I don’t have that much hands-on, so you’ll have to excuse the lack of detail here.”
The updates included:
- Proactive Statistics Refresh: “From what I understand, that basically has required a manual process in the past to get the statistics on a table updated. Now it doesn’t.”
- Incremental Statistics Refresh: Jason noted this seemed to lead into result set caching.
- Result Set Caching (Generally Available): “Helps rebuild and get you faster execution with all the statistics updates here,” Jason observed.
- MERGE Command (GA): “Really just a GA announcement,” John said. “Bringing the warehouse SQL set a lot closer to what everyone would think of as the standard T-SQL transactions.” MERGE brings together INSERT, UPDATE, and DELETE into a single statement.
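To make that last bullet concrete, here is the general shape of a T-SQL MERGE, held in a Python string as you might submit it from a notebook or pipeline. All table and column names are invented for the example.

```python
# Illustrative T-SQL MERGE: one statement that updates changed rows,
# inserts new rows, and deletes rows missing from the source.
# Table and column names here are invented for this example.
merge_sql = """
MERGE INTO dbo.DimCustomer AS target
USING stg.Customer AS source
    ON target.CustomerId = source.CustomerId
WHEN MATCHED THEN
    UPDATE SET target.Name = source.Name, target.City = source.City
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerId, Name, City)
    VALUES (source.CustomerId, source.Name, source.City)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
"""
print(merge_sql)
```

Without MERGE, that upsert-plus-cleanup pattern takes three separate statements (UPDATE, INSERT, DELETE) plus the logic to coordinate them.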
Real-Time Intelligence: Out of the Dark Woods
“We’re out of the dark woods of the space that we’re not as familiar with,” Jason announced. “We’re coming to the top of the hill and look at the landscape below you. It’s Real-Time Intelligence.”
John’s response: “Love my real-time.”
The updates included:
MQTT Connector: Message Queuing Telemetry Transport is now supported as an input source to Event Stream. “I haven’t had occasion to use it, but it’s very much a standard. So it’s good that it’s here,” John said.
Real-Time Weather Connector (GA): “If you need weather in your data, you can leverage Event Stream to do that. It’s pretty cool. I quite like it still,” John noted. Jason added a warning: “It’s a little chatty though, so just recognize you’re going to be paying for that.” John confirmed there’s a tenant-level admin switch to turn it off.
Accelerated OneLake Shortcuts: John dove deep on this one, explaining how shortcuts now support acceleration based on datetime columns. “Event House is really highly optimized around datetime features… it’s just blazingly fast.”
He explained accelerated shortcuts: “If you’ve got data in your Lakehouse and you want to use Event House to perform queries on it, you can turn on acceleration. It’s basically an external table as far as Event House is concerned… When you turn on acceleration, it’s very close to as fast as data that’s actually resident in Event House natively.”
The ultimate goal? “I have got to think that the ultimate goal here is to work with Lakehouse data natively with Event House. That would be something if we could do that.”
Simplified KQL Syntax: Instead of using “external table” syntax, you can now treat accelerated shortcuts as internal tables. John wondered: “I’m curious as to what other things that means, what other scenarios light up? Because external tables were always constrained to a subset of features. I wonder if there’s fewer constraints on external data as a result of this, or if it’s just a cosmetic thing translating behind the scenes.”
Copilot Support for Querying Shortcuts: “Copilot didn’t support those external tables—those accelerated tables—and now it does,” John explained. “And I imagine changing that syntax helped with that significantly.”
Data Factory: Expanding Incremental Copy Support
“Data Factory really is just a bunch more connectors that are now supported for the incremental copy feature of the copy job,” John summarized.
The new connectors include: BigQuery, Google Cloud Storage, DB2, Fabric Lakehouse folder, Azure Files, SharePoint List, Amazon RDS for SQL Server, Amazon RDS for Oracle, and Azure Data Explorer (Kusto).
Jason was particularly interested in one: “SharePoint Lists with incremental copy is a real interesting one, John. I need to play with this a little bit. I’m going to be curious to play with this. We’ve got some stuff that we’re doing. So that’s definitely one I’m going to be playing with here soon.”
The Wrap-Up: Blame It on Formatting
As they closed, John noted: “That very abruptly takes us to the end of the blog post.”
Jason acknowledged some confusion during the recording: “We had a little confusion in some of these spots today. I think we’re going to blame it on formatting for the blog.”
John agreed: “I think we are.”
Jason’s conclusion: “Certainly couldn’t be us.”
They ended with an exciting note: “The next recording that we do is likely going to be in person, so that’ll be fun.”
The Bottom Line
The January 2026 Fabric update signals Microsoft’s strategic priorities:
- Agentic AI integration: The Osmos acquisition shows Microsoft’s commitment to autonomous data engineering
- Developer experience: Git branch commits, Python SDK, and VS Code extensions make Fabric more developer-friendly
- OneLake security maturity: Extending to mirrored items and providing granular APIs shows the foundation is solidifying
- Real-Time Intelligence polish: Simplified syntax, Copilot support, and performance optimizations continue refining RTI
- Variable Library evolution: Item Reference type enables more flexible deployment configurations
- Enterprise readiness: Immutable logs, expanded Git Enterprise support, and connection management improvements
Jason and John’s candid assessment—including admitting when they haven’t had hands-on experience with features—provides valuable context for which updates matter most to working data professionals.
Links
Microsoft Fabric Blog Posts:
Previous Episodes:
- Episode 318 – Power BI January 2026 Feature Update
- Episode 317 – January 2026 News Catch-Up
- Episode 316 – 2024 Recap and 2026 Predictions
Subscribe: SoundCloud | iTunes | Spotify | TuneIn | Amazon Music

