Episode 306 – Fabric September 2025 Part 1: The Multitasking UI Revolution, Variable Libraries GA & User-Defined Functions

Recording under the weather but energized by his son’s 30-second cross-country PR, Jason joined John to tackle the first half of Microsoft Fabric’s September 2025 feature summary—a release so massive it required splitting across episodes. Between celebrating FabCon Vienna’s 4,000 attendees and mourning SharePoint list visualization’s long-overdue death, they unpacked the buried-lead UI transformation, variable libraries reaching GA, and why materialized lake views need scheduled refreshes.

FabCon Vienna & The Blog Post Strategy Shift

FabCon Vienna drew over 4,000 attendees despite opening day train disruptions from a municipal fire. Jason praised Microsoft’s restraint: “I woke up at 3 in the morning… there’s not a thousand blog posts. That’s amazing.”

Instead of overwhelming developers with separate announcements, most news consolidated into the monthly feature summary. “Kudos to the team on that… we were able to read through them and get all of the information.”

The keynote featured security-focused Day 2 content from Casper Young and Kimes, reinforcing that “your data is the holy grail… it’s important to be able to do these things.”

SharePoint List Visualization: Good Riddance

Before diving into Fabric updates, the duo celebrated a deprecation: Power BI integration with SharePoint lists and libraries—a feature Jason has “railed against” since release.

“The implementation of this has been garbage since day one, and I’m so glad to see that it’s ending up in the bin where it belongs.”

The core problem: users couldn’t control report refreshes. “I expect to go in and be able to hit refresh and see those changes reflected… and it doesn’t work that way.”

John acknowledged the concept had merit but execution failed: “Not the least of which you couldn’t control how the reports were refreshed, and I think that’s frankly the biggest problem.”

Jason’s verdict: “I’m just glad that it’s over. It’s been a blight on the history and they made the right decision.”

The deprecation post initially appeared, vanished, then reappeared—prompting community jokes about “deprecating a deprecation blog.”

OneLake Catalog: Govern Tab Goes GA

The govern tab in OneLake Catalog reached general availability, providing high-level governance dashboards with decomposition trees showing domain structures.

Jason tempered expectations: “By having govern tab, I should be good to go… there’s no more work that I have to do. Any type of governance, right?”

John framed it as foundation: “This is going to be the logical launching place to add governance capabilities as we move forward. Things like OneLake security… this should be a good place to do your OneLake security.”

Data domains remain underadopted in Jason’s experience—partly due to naming confusion with traditional on-prem domains—but the govern tab provides necessary visibility for organizations ready to implement domain-based governance structures.

Complementary updates included:

  • Public API for domains (GA): Programmatic domain management, workspace assignment by capacity/ID/principals, bulk role operations
  • Purview integration (GA): Protection policies, default sensitivity labels for domains, DLP policies

Variable Libraries: Finally Beyond Pipelines

Variable libraries reached GA with expanded scope—the feature Jason nearly missed due to blog heading hierarchy issues.

Previously confined to deployment pipelines (the dev/test/prod type, not data pipelines—“don’t get me started” on Microsoft naming), variable libraries now support:

  • Dataflows Gen 2
  • Copy jobs within pipelines
  • Workspace-level variable management

John explained the power: “The ability to define variables in the context of a workspace and then have all theoretically all of the items within that workspace able to read them.”

Instead of changing sources across multiple items, update one variable and all referencing items adjust automatically. Variable sets fit deployment pipeline scenarios well, carrying environment-specific configurations across dev, test, and prod.
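The pattern John describes—one named variable, many consumers, with per-environment value sets—can be sketched conceptually in plain Python. The `VariableLibrary` class and variable names below are illustrative stand-ins, not Fabric’s actual API:

```python
# Conceptual sketch of the variable-library pattern: one definition,
# environment-specific value sets, many consumers. The class and names
# are invented for illustration -- not Fabric's real API.

class VariableLibrary:
    def __init__(self, defaults, value_sets):
        self.defaults = defaults      # variable name -> default value
        self.value_sets = value_sets  # environment -> overrides

    def resolve(self, name, environment):
        """Return the variable's value for a given environment,
        falling back to the default when no override exists."""
        overrides = self.value_sets.get(environment, {})
        return overrides.get(name, self.defaults[name])

library = VariableLibrary(
    defaults={"source_server": "dev-sql.contoso.local"},
    value_sets={
        "test": {"source_server": "test-sql.contoso.local"},
        "prod": {"source_server": "prod-sql.contoso.com"},
    },
)

# Every item (dataflow, copy job, ...) reads the same variable; swapping
# the active value set retargets all of them at once.
print(library.resolve("source_server", "dev"))   # falls back to default
print(library.resolve("source_server", "prod"))
```

The point of the sketch: consumers name the variable, never the value, so an environment switch is a single change at the library level.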

“It’s far from all of the items in a workspace, which would be the ultimate vision here, but we have more,” John noted.

Stephanie Bruno (who Jason invited for future FabCon coverage) identified dataflows as massive in this release—variable library support being one piece of that puzzle.

The Buried Lead: Fabric Multitasking UI Overhaul

Titled “Fabric Multitasking Gets Developer Friendly Upgrade,” the announcement undersold what John called the post’s biggest change: “How about we just change that title to ‘We’ve completely changed the way the UI works’?”

Jason’s discovery: “When I went in, I started playing with this before I got to this part of the blog post, I’m like, ‘What in the world is going on?'”

The New Experience

Available only in the Fabric UI (app.fabric.microsoft.com, not app.powerbi.com), the transformation resembles Visual Studio Code, Notepad++, and—most importantly—Microsoft Teams tabs.

Opening multiple artifacts creates tabs across the top rail beside the Fabric logo. Each workspace gets numbered icons in distinct colors—artifacts from that workspace inherit its color coding.

“It looks like tabs… everything starts going along the top ribbon, the top rail,” Jason described. “This is opening up in the same window, multiple tabs across the top.”

Key improvements:

  • No more 10-artifact cap: The previous limit disappeared, though new limits remain unspecified
  • Persistent state: Jason left tabs open overnight; the first artifact took a moment to refresh, others loaded instantly
  • Workspace context retention: Opening items from the same workspace via flyout doesn’t replace main window content
  • Color coding: Visual distinction between workspaces via tab colors

John highlighted behavioral changes requiring adjustment: “If you pick another workspace, then it will update that main window because you haven’t already opened it.”

Jason’s assessment: “Pretty darn well done… I’m sure I’m going to find things I dislike about it because I’m a contrarian at heart when it comes to ‘you’ve moved my cheese,’ but at first blush, I like it.”

The developer-friendly framing makes sense—it mirrors IDE patterns—but the Teams comparison suggests broader accessibility beyond technical users.

User-Defined Functions Reach GA with Major Enhancements

UDFs graduated to GA with significant capability additions. John’s framing: “The fabric version of stored procedures kind of, running in Python.”

They underpin transactional task flows in Power BI (enabling write-back scenarios) and now support:

  • Test mode: Validate before publishing
  • OpenAPI spec generation: Automatic swagger-equivalent documentation
  • Async functions: Multitask with async capabilities
  • Pandas dataframes: Not just Spark dataframes
  • Notebook integration (GA): Built-in UDF capabilities via notebook utils

Notebook integration spans four languages: Python, PySpark, Scala, R. Powered by Apache Arrow, it provides “seamless compatibility with existing Pandas workflows and scalability to process large-scale datasets.”
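The async capability is about letting a single function invocation overlap its waits—say, several data calls—rather than serializing them. A generic `asyncio` sketch of that idea (the `fetch` and `handler` functions are stand-ins for illustration, not Fabric’s UDF programming model):

```python
import asyncio

# Stand-in for an I/O-bound call a function might make (e.g. a query).
# Purely illustrative -- not Fabric's UDF API.
async def fetch(source: str, delay: float) -> str:
    await asyncio.sleep(delay)  # simulate waiting on I/O
    return f"rows from {source}"

async def handler() -> list:
    # Overlap the three waits instead of running them back to back,
    # so total wall time is roughly the longest wait, not the sum.
    return await asyncio.gather(
        fetch("sales", 0.1),
        fetch("inventory", 0.1),
        fetch("customers", 0.1),
    )

results = asyncio.run(handler())
print(results)
```

`asyncio.gather` preserves input order, so the results line up with the calls regardless of which wait finishes first.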

Jason admitted: “This is something I haven’t gotten an opportunity to dive into deep enough… with this being GA, it feels like it’s the right time.”

He’d been holding ADA’s transactional task flows post open for months—now the GA status removes hesitation.

Materialized Lake Views & The Refresh Question

Materialized lake views gained features that raised architectural questions for Jason.

New capabilities:

  • Smart refresh determination: Automatically chooses incremental, full, or no refresh
  • Lineage visibility: View dependencies across Lakehouse
  • Custom environments: Configure refresh context
  • On-demand refresh: Manual triggering beyond scheduling
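The smart-refresh idea—choose incremental, full, or no refresh based on what changed upstream—can be illustrated with a toy decision function. The rules below are a guess at the general shape for illustration only; the post doesn’t document Fabric’s actual determination logic:

```python
# Toy illustration of a smart-refresh decision. The branching rules are
# invented for this sketch; Fabric's real algorithm is not described
# in the source post.

def choose_refresh(source_changed: bool, appended_only: bool,
                   definition_changed: bool) -> str:
    if definition_changed:
        return "full"         # view logic changed: recompute everything
    if not source_changed:
        return "none"         # nothing new upstream: skip the refresh
    if appended_only:
        return "incremental"  # only new rows: process just the delta
    return "full"             # updates/deletes upstream: recompute

print(choose_refresh(source_changed=True, appended_only=True,
                     definition_changed=False))
print(choose_refresh(source_changed=False, appended_only=False,
                     definition_changed=False))
```

Whatever the real rules are, the takeaway matches Jason’s observation: these are refreshed views on a schedule or on demand, not continuously maintained ones.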

Jason’s concern: “The whole concept of refresh here brings into question. When I think of materialized views, I don’t think of refreshing.”

John contextualized: “If you’re used to Kusto, you don’t even think about it, it just happens automatically behind the covers.”

The scheduled refresh model suggests architectural differences from Eventhouse’s near-instantaneous synchronization. Jason acknowledged: “Slightly different architecture than what I thought it was… definitely something that we got to think slightly differently, but I think it’s positive.”

John emphasized performance benefits: “These good features… goes a long step into improving performance of Lakehouse just generally for querying purposes.”

Python Notebooks, Notebook Utils & Resource Monitoring

Pure Python notebooks (not PySpark) reached GA—perfect for smaller workloads without spinning up entire Spark environments.

Jason’s personal excitement: his son’s learning Python in computer science class. “I’ve been helping him with his homework lately and I’ve been happy with my level of recall… I’m going to turn him loose on some of these Python notebooks here in Fabric over the summer.”

The teacher “talks at 1.7 speed”—faster than Jason’s usual 1.25x listening rate—but the coursework provides excellent foundation for Fabric experimentation.

Notebook utils APIs reached GA, providing:

  • Parallel notebook execution: Run multiple with run_multiple
  • Fast copy library: Optimized data movement
  • CRUD APIs: Full create/read/update/delete for notebook items and Lakehouse
  • Runtime context: Session information (node count, etc.)
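Inside Fabric, parallel execution goes through notebook utils (`run_multiple`, as the summary names it); the same fan-out pattern can be sketched generically with the standard library. The `run_notebook` stub below is a placeholder for the real notebook-utils call:

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder for the real notebook-utils invocation; illustrative only.
def run_notebook(name: str) -> str:
    return f"{name}: succeeded"

notebooks = ["ingest_sales", "ingest_inventory", "ingest_customers"]

# Fan the runs out in parallel and collect results in input order,
# mirroring what a run-multiple style API handles for you.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(run_notebook, notebooks))

print(results)
```

`ThreadPoolExecutor.map` returns results in submission order even when the underlying runs finish out of order, which keeps downstream handling simple.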

Jason’s use case: “I do have some developers doing external work for some of the tools that we’re building that are outside of Fabric… I’m excited to be able to call into and get data out of Fabric.”

Python notebook real-time resource usage monitoring arrived—but only for pure Python notebooks, not PySpark. Jason lobbied: “I’d love to be able to pop out a pane to see what my notebook used… I have an idea of what my resource usage is… maybe I need to move that or… I can spin down my capacity a little bit.”

Querying Mirrored Databases in Spark: Huge for Jason

New capability letting Spark notebooks query mirrored databases directly eliminated painful workarounds.

Previously: “You’d need to have a Lakehouse, you’d create a shortcut to the fabric item, the mirrored database… to gain access,” John explained.

Now: “You can have a notebook that doesn’t necessarily have to have a lakehouse attached to it. You can just work on that data directly.”

Jason’s timing: “I just ran through this three weeks ago… in order to use a mirrored database, I had to do the Lakehouse model and it was painful. This is going to eliminate a whole step.”

No more creating Lakehouse solely for mirrored database access in notebooks.

Lakehouse Explorer Download, Multi-Lakehouse Support & Naming Confusion

The Lakehouse Explorer UI (not OneLake Explorer—”Microsoft naming”) gained file download capability (upload existed previously) and multi-Lakehouse support.

Users can now pin multiple Lakehouses into the left pane, jumping between contexts. John: “It looks like now it’s a tool that is independent of the lakehouse it came from… that’s the direction we’re heading.”

The duo engaged in a 20-minute circular argument about this feature yesterday—Jason reserved the right to revisit in future episodes but demonstrated restraint: “I said I wasn’t going to go there. I’m not. Look at me, holding my work.”

Download requires admin enabling OneLake data export—a sensible security control.

Spark Monitoring, Run Series Analysis & Application Comparison

Three GA announcements for Spark performance:

  • Monitoring APIs (GA): Programmatic visibility into Spark environments
  • Run series analysis (GA): Track performance trends across job runs—detecting slowdowns, improvements, anomalies
  • Application comparison (GA): Compare runs from different application versions

Jason prioritized application comparison: “More important… when I really want to see what my performance is for my application.”

The feature analyzes metric deltas between baseline and actual runs, providing insights into execution time, IO trends, resource utilization—critical for optimization work.

“We made a couple of tweaks and now we went from a 10-minute run to a 45-second run… was it really that we were that inefficient or we were just that much more efficient?”

Fabric Extensibility & MCP Server

Developer-focused announcements included:

  • Fabric extensibility toolkit (preview): Formerly workload development kit—Mike Carlo’s using it extensively for Power BI Tips products
  • Fabric MCP server (preview): Model Context Protocol implementation—”swagger for AI”

John explained MCP: “I can build an MCP on my data… this is an MCP server provided by Microsoft that essentially articulates or tells AI how to interact with fabric objects.”

Example workflow: “Plug into your IDE… plug into GitHub Copilot and say, ‘Hey GitHub Copilot, go provision me a fabric data warehouse’ and it will understand how to go and do that.”

Workspace-level workload assignment lets workspace admins add custom workloads without tenant admin involvement—crucial for large organizations where “you are never going to know who your fabric admin is.”

Looking Ahead: Part Two & Power BI

At 45 minutes, they called it—covering Fabric platform, OneLake, and data engineering sections. Next episode tackles:

  • Data science
  • Data Factory (Jason teased: “It’s pretty cool… there’s a lot here”)
  • Remaining sections
  • Separate Power BI coverage

Jason’s preview for impatient listeners: “If you haven’t taken a look at the data factory section… start to look at it and then you can come back and hear our opinion on it.”

The September release demonstrated Microsoft’s maturation—consolidating announcements for digestibility while delivering substantive improvements from UI transformation to performance tooling. The buried leads (multitasking UI, variable libraries expansion) mattered more than headline features, while GA graduations (UDFs, Python notebooks, monitoring APIs) signaled production readiness for capabilities developers have tested for months.


Links:

Subscribe: SoundCloud | iTunes | Spotify | TuneIn | Amazon Music

