Episode 298 – Fabric June 2025: Notebooks Get Variable Libraries & Event Streams Level Up
Recording on a somber note following devastating flash floods in nearby Kerrville, Texas, Jason and John nevertheless pressed forward with the Microsoft Fabric June 2025 feature summary—a release heavy on notebook improvements, real-time intelligence enhancements, and cost reductions for AI workloads.
Personal Note: Community Impact
Before diving into features, Jason shared the very real impact of the Kerrville floods affecting their Texas community. While his immediate circle remained safe, the tragedy touched adjacent connections—including members of their gym who perished. The reminder about flash flood dangers and the swift community response from local grocers underscored technology’s role in daily life, but also its proper place behind human safety and wellbeing.
For those able to help, verified relief funds remain active for affected families.
Notebooks Finally Get Variable Libraries
The most anticipated notebook update arrived: variable library integration, now in preview. John called it “kind of critical really—all elements are going to need to use it.” Pipelines had variable library support first; notebooks join the party via the notebookutils library.
Jason emphasized this matters most for his workflow: “This is where I’m going to use it most honestly—in notebooks and across the board.” The capability enables proper dev/test/prod scenarios without hardcoding values, addressing a long-standing gap in notebook reusability.
To leverage the feature, developers import the notebookutils library and call its functions to retrieve variable values in code. It’s exactly the functionality data engineers have been requesting since variable libraries launched.
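As a rough sketch of what that looks like in a Fabric notebook: the `notebookutils.variableLibrary` call below reflects our reading of the preview API, and the library and variable names are hypothetical. A local fallback keeps the snippet runnable outside Fabric.

```python
# Hedged sketch: reading an environment-specific value from a variable library
# inside a Fabric notebook. "EnvVars" and "lakehouse_path" are made-up names.
try:
    import notebookutils  # available only inside Fabric notebooks

    # Resolves to the active value set (dev/test/prod) for the workspace.
    lakehouse_path = notebookutils.variableLibrary.get("$(/**/EnvVars/lakehouse_path)")
except ImportError:
    # Local stand-in so the example runs anywhere; in practice this hardcoding
    # is exactly what variable libraries let you avoid.
    lakehouse_path = "abfss://dev@example.dfs.core.windows.net/lakehouse"

print(lakehouse_path)
```

The point is that the same notebook code promotes cleanly across dev/test/prod, with the environment-specific values living in the variable library rather than the notebook.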
Version History Goes GA
Version history for notebooks reached general availability, though Jason raised concerns about the five-version limit they discovered while testing semantic models. The duo questioned whether explicit saves pin versions separately from auto-saves (which occur every five minutes), or whether everything rolls off chronologically.
The feature provides side-by-side comparison capabilities—useful for tracking changes and understanding what broke. Jason particularly appreciates historical tracking: “I love the ability to go back and see exactly what was going on at the time.”
Documentation suggests notebook version history may retain more versions than the semantic model implementation, but the team needs further testing to confirm limits and behaviors.
Copilot Auto-Completion Arrives
Notebooks gained copilot auto-completion in preview—inline code suggestions while typing, similar to Visual Studio Code. The feature includes a toggle switch for those who find it distracting.
Jason noted he likes it on desktop but struggles with mobile implementations (though notebooks on mobile seems unlikely anyway). The team included the feature description twice in the blog post, prompting John’s quip: “The team really likes this feature because the section that describes it is in here twice.”
T-SQL Notebooks Mature
T-SQL notebooks with monitoring improvements hit general availability, bringing recent run history and scheduling capabilities up to parity with other notebook types. These notebooks work exclusively against Data Warehouse, leveraging the T-SQL engine rather than Spark.
A new preview feature lets Python notebooks run T-SQL against the Data Warehouse—but John clarified this means true Python notebooks (lightweight, single-node) rather than PySpark. Developers use a %SQL directive to embed SQL code within a Python notebook, providing language flexibility against the warehouse.
Better Notebook Creation Flow
Creating new notebooks got significantly better with prompted naming and location selection upfront—including folder support. Previously, notebooks silently saved as “Notebook 1” or “Notebook 2” until users hunted them down later.
“This isn’t exclusive to notebooks,” John noted. “There are a number of fabric artifacts that are like this, and we’re seeing changes in that department as well.” Users can now assign notebooks to tasks during creation, though neither host uses that feature yet.
AI Functions Get Cheaper & Faster
Data science updates centered on AI function improvements: better performance and lower costs through GPT-4o mini integration. The upgrade requires Fabric Runtime 1.3, which pre-installs the necessary libraries—no manual additions needed.
AI functions also expanded beyond PySpark notebooks to pure Python notebooks using pandas dataframes, increasing their ubiquity across the platform.
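For a feel of the pandas-side call shape, here is a hedged sketch: the `synapse.ml.aifunc` import and the `.ai` accessor follow our reading of the preview documentation, and a local stub keeps the snippet runnable outside Fabric Runtime 1.3.

```python
# Hedged sketch of Fabric AI functions over a pandas DataFrame.
# Outside Fabric (where synapse.ml.aifunc is not pre-installed) we fall back
# to hardcoded placeholder labels so the example still executes.
import pandas as pd

reviews = pd.DataFrame({"text": ["Great product!", "Terrible support."]})

try:
    import synapse.ml.aifunc as aifunc  # pre-installed on Fabric Runtime 1.3

    # Per the preview docs as we understand them, AI functions hang off an
    # `.ai` accessor on pandas Series/DataFrames.
    reviews["sentiment"] = reviews["text"].ai.analyze_sentiment()
except ImportError:
    # Local stand-in values, only so the sketch runs anywhere.
    reviews["sentiment"] = ["positive", "negative"]

print(reviews)
```

Because GPT-4o mini now backs these calls, the same code gets the promised cost and latency improvements without any change on the developer’s side.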
Jason acknowledged data science sits “a little bit in the deep end from our base skillset,” asking for listener grace when they occasionally trip up on terminology. The naming confusion continues—AI functions differ from Fabric data agents (formerly AI skills), creating a data dictionary requirement for anything prefixed with “AI.”
Real-Time Intelligence Enhancements
The RTI section packed substantial updates, starting with Azure Monitor integration improvements. Application Insights and Log Analytics—both backed by Kusto—now connect more natively to KQL Query Sets within Fabric.
John referenced multi-year frustrations with Azure Monitor limitations, noting you could previously address instances as Kusto clusters but with significant wiring overhead. “What’s new here is in the KQL query set within Fabric, get data just basically gives you the option to go and connect to one of these and have them available as another event house.”
Cross-cluster queries, append statements, and other operations now work directly. John wishes for true data retrieval (not just in-place connections) and native Event Stream sources for Application Insights—currently possible but requiring extensive configuration through Event Hubs.
Event Stream SQL Operator
The most powerful Event Stream addition: an SQL operator for custom real-time transformations. John had been checking repeatedly and it finally appeared mid-week.
“If one of the out of the box capabilities don’t do it for you, you can probably get the job done in SQL,” he explained. The operator handles joins, aggregations, and complex transforms across multiple streaming sources.
One limitation frustrated John: the SQL operator must follow the first amalgamation node, not attach directly to individual data sources. “If you can’t identify your data sources from each other based on their data, you’re out of luck.” He wants source identification columns appended automatically—a feature request for ideas.fabric.microsoft.com.
The SQL operator includes its own dedicated UI that John found “quite powerful” during initial testing.
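To illustrate the shape of transform the operator enables—joins and aggregations across streaming sources—here is a standalone sketch using sqlite3 tables as stand-ins for two streams. This shows the kind of SQL involved, not Event Stream’s actual operator syntax or schema.

```python
# Illustrative only: a join + aggregation of the sort an Event Stream SQL
# operator could perform, run here against sqlite3 stand-ins for two streams.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE temperature (device_id TEXT, reading REAL);
CREATE TABLE humidity    (device_id TEXT, reading REAL);
INSERT INTO temperature VALUES ('d1', 21.5), ('d1', 22.0), ('d2', 19.0);
INSERT INTO humidity    VALUES ('d1', 0.40), ('d2', 0.55);
""")

# Join the two "streams" on device and aggregate per device.
rows = conn.execute("""
    SELECT t.device_id,
           AVG(t.reading) AS avg_temp,
           AVG(h.reading) AS avg_humidity
    FROM temperature t
    JOIN humidity h ON h.device_id = t.device_id
    GROUP BY t.device_id
    ORDER BY t.device_id
""").fetchall()

print(rows)
```

Note that distinguishing the two sources here relies on them being separate tables—which is exactly John’s complaint about the real operator: once streams merge ahead of the SQL node, you need something in the data itself (like an appended source column) to tell them apart.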
Multi-Schema Support
Event Streams gained multi-schema support—previously a blocking limitation. The system formerly derived schema from the first data chunk and locked users into that structure. Different event types with varying schemas (common in Event Hub scenarios) caused problems.
Now Event Streams infer multiple schemas, letting users branch flows based on schema type. John noted this likely relates to the Confluent Cloud Kafka update supporting schema registry decoding, though he lacks deep Kafka expertise.
Copilot Reaches Real-Time Dashboards
Copilot arrived for real-time dashboard creation—not in KQL Query Sets (where it already existed), but specifically when building dashboard tiles. Users can now write natural language requests to generate KQL queries behind visualizations.
For Jason, this represents a transformative capability: “Sitting down to write a real-time dashboard, man, that’s a daunting task. This was the thing that made copilot real for me with Power BI reports—being able to say ‘give me a report that does X’ and have it give me a 60 to 80% solution.”
The ability to iterate on tiles through natural language dramatically lowers the KQL dashboard entry barrier.
Sharing KQL Queries
A new sharing feature lets users create links to KQL queries with three clipboard options: a Fabric link, the query code, or the formatted results. Jason sees value John initially missed—sharing specific code segments with validation that they work, rather than pointing someone to a query set with 200 other operations.
“When you send me a link to a KQL query set, John, you may have done 200 other things in there,” Jason noted. Isolated code sharing streamlines collaboration, particularly for troubleshooting and knowledge transfer.
John countered with his real desire: folders for query set tabs. Jason deadpanned: “But then you’re going to want folders for your folders.”
Other Updates
Additional noteworthy changes included:
- Materialized Lake views entered preview—similar to semantic model materialized views but over Lakehouse data
- Result set caching came to Data Warehouse in preview, serving repeated queries from memory
- Azure Data Factory items reached GA in Fabric, mounting factories from Azure with Git enablement support
- Managed private endpoints for Event Streams became generally available (not all regions yet)
The duo skipped detailed coverage of warehouse and data factory updates, noting lighter months in those areas.
Looking Ahead
With summer typically bringing quieter release cycles, John and Jason remain curious about July’s updates. Jason expressed excitement about Mike Carlo’s work with Power BI Tips leveraging workloads—initially skeptical about the ISV workload approach, he now finds their implementation “really neat” and hopes to explore further before Atlanta in August.
June delivered focused improvements to daily-use features: notebooks becoming more enterprise-ready, Event Streams gaining serious transformation power, and copilot expanding into real-time scenarios. As John summarized: “It’s a good month.”
Links:
- Microsoft Fabric June 2025 Feature Summary
- Fabric Roadmap
- Episode 297 – Microsoft Fabric June 2025 Feature Summary
- Submit Feature Ideas
Subscribe: SoundCloud | iTunes | Spotify | TuneIn | Amazon Music