Episode 307 – Fabric September 2025 Part 2: Merge Comes to Warehouse, Data Agents Everywhere & The Activator Speed Boost
Hours after showering, trimming his beard, and re-architecting customer solutions based on Part 1 discoveries, Jason rejoined John for the September 2025 marathon’s second installment—covering data science, the game-changing MERGE statement in Warehouse, Fabric SQL improvements, Real-Time Intelligence maps (that maybe should be Power BI?), and why “10 times more powerful” really meant “10 times faster.”
Playing With Yesterday’s Announcements
Jason reported hands-on experimentation since Part 1: variable libraries tested, OneLake Catalog security tab still missing, and mirrored databases in Spark notebooks proving “huge” for current projects.
“I actually went in and started messing with… some of the things we’re going to talk about here,” Jason explained. The multi-Lakehouse experience remained unconvincing—”I’m going to parlay that to another conversation”—but mirrored database access eliminated painful workarounds.
Mid-conversation realization: “I had a quick chat… oh my goodness, I think this just saved me a couple of steps and a replication of data.” The feature John described as “the same underlying mechanism” enabled direct notebook access without Lakehouse intermediaries.
Data Agents: Mirror Support, CI/CD & The Copilot Studio Problem
Fabric data agents gained mirrored database support—”definitely going to come into play with some of the things I’m talking about,” Jason noted, describing agent work happening that week.
But Copilot Studio integration frustrated him: “I’m watching the calls go across and it feels like it just sort of loses the plot after a little bit… makes the calls to the data agent great at the beginning, and then it kind of forgets that’s a source that it has access to.”
John diagnosed: “Sounds like a problem with Copilot Studio, not so much the agent.”
The same queries worked perfectly in standalone Copilot within Fabric—the issue emerged only when exposing agents through Copilot Studio to SharePoint or Teams. “If the only way I’m able to do that is through Copilot Studio to be able to expose it elsewhere…”
Additional data agent updates:
- CI/CD support: “Table stakes,” John declared—everything needs CI/CD before GA
- Python SDK client consumption: External application integration via Python
- Example query feedback: SDK-level prompt refinement guidance
- Query influence discovery: Understanding which examples shaped agent responses
- Diagnostic downloads: Detailed performance analysis for tuning
Jason’s frustration: “If we can make this work from Python client SDKs, why can’t we make it work in Copilot Studio consistently?”
The answer hinted at Azure AI Foundry workarounds—”Problem is that costs more money and it’s a different thing, more maintenance. I really want Copilot Studio to step up and do this for me.”
Data Wrangler: AI Functions Return (Sort Of)
Data Wrangler gained AI function capabilities (preview): sentiment analysis, language translation, and other features that had been available in Power Query for Power BI Desktop before their removal.
John framed it: “An attempt to put the low code face on a Python notebook… looks a little bit like Power Query, but… writes Python for you as opposed to Power Query which writes M.”
Jason’s assessment: “I’ve outgrown that.” With Cursor, Copilot, Claude, or ChatGPT readily generating code from instructions, Data Wrangler’s wizard interface added friction rather than value for experienced users.
“When I can give an instruction… and say, ‘Hey, write me some code to do X, Y, and Z,’ and then I can manipulate that beyond it with my own knowledge, I find that Data Wrangler… I’ve tripped over myself more.”
John acknowledged the audience: “There’s those who find it useful… you can learn it, then you won’t need it anymore.”
The AI functions resurrect capabilities that disappeared from Power BI Desktop—now surfacing in Fabric’s data engineering context where they arguably belong. Data Wrangler runs pure Python (not PySpark), handling translation when returning to notebooks.
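As a rough illustration only: the per-row AI functions Data Wrangler can now generate (sentiment, translation, and the like) call a model under the hood, but the shape of the transform is simply a function mapped over a column. The lexicon and function name below are invented stand-ins for demonstration, not the real Fabric API:

```python
# Toy stand-in for a per-row AI function (e.g. sentiment analysis).
# The real feature calls a language model; this tiny lexicon exists
# purely to show the column-wise shape of the generated code.
POSITIVE = {"great", "love", "fast"}
NEGATIVE = {"slow", "broken", "hate"}

def toy_sentiment(text: str) -> str:
    """Classify a row of text as positive/negative/neutral."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

reviews = ["Love the new merge support", "Copilot Studio feels broken"]
print([toy_sentiment(r) for r in reviews])  # -> ['positive', 'negative']
```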
Jason’s personal connection: His son’s computer science class homework had him relearning Python fundamentals. “Starting to go back to the beginning of language and syntax… they don’t call it the language, they call it vocabulary to try and mirror what they’re used to.”
Data Warehouse: The Merge Statement Game-Changer
Only two warehouse updates, but John called one “a game changer”: the merge statement reaching general availability.
“Who cares about Merge? Merge is… basically a conditional insert or update,” John explained. “Update the value of this record, and if you don’t have it, insert it.”
The upsert pattern—fundamental to SQL workflows—had been conspicuously absent from Fabric Warehouse.
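Fabric Warehouse takes the T-SQL MERGE form; as a minimal stand-in, SQLite’s upsert syntax expresses the same insert-or-update conditional, so the pattern can be sketched in Python like this (table and column names are invented):

```python
import sqlite3

# In-memory table standing in for a Warehouse dimension table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customer VALUES (1, 'Alice')")

# In Fabric Warehouse the T-SQL MERGE form would read roughly:
#   MERGE INTO customer AS tgt
#   USING staged AS src ON tgt.customer_id = src.customer_id
#   WHEN MATCHED THEN UPDATE SET name = src.name
#   WHEN NOT MATCHED THEN INSERT (customer_id, name)
#        VALUES (src.customer_id, src.name);
# SQLite spells the same conditional insert-or-update as an upsert:
upsert = """
INSERT INTO customer (customer_id, name) VALUES (?, ?)
ON CONFLICT(customer_id) DO UPDATE SET name = excluded.name
"""
conn.execute(upsert, (1, "Alicia"))  # key exists  -> update
conn.execute(upsert, (2, "Bob"))     # key missing -> insert

print(sorted(conn.execute("SELECT * FROM customer")))
# -> [(1, 'Alicia'), (2, 'Bob')]
```

Either way, the caller never has to check whether the record exists first; the engine does the conditional branch.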
Jason anticipated team reactions: “Some of the folks I work with are huge Warehouse fans… I can hear the sneer, ‘Oh, so why do we even need to bother with your Lakehouse anymore, Jason?'”
He acknowledged the significance: “This brings a huge component to the table… some of the things I’m trying to accomplish with Azure SQL, Fabric SQL… now I can just use Warehouse for some of that because the functionality isn’t there in Fabric SQL yet.”
Warehouse’s GA status versus Fabric SQL’s preview status made the choice clearer. Jason’s current projects involve writing records without knowing whether they already exist; MERGE solves that elegantly.
John clarified his Lakehouse advocacy: “I don’t have a problem with Warehouse at all. I just don’t know why you would use it if you’ve got Lakehouse and you’re in a new environment and you’re not a SQL person.”
The second update: Migration Assistant (GA) for moving from Synapse Analytics dedicated pools or SQL Server databases to Fabric Warehouse using DAC Pack files.
“I’ve never used DAC Pack files either. I’ve used BACPAC files, but never DAC Pack,” Jason admitted, pronouncing carefully: “D as in dog versus B as in boy.”
The SQL Server database reference typically spans on-premises and Azure SQL—Jason planned testing whether this provided an Azure SQL to Warehouse migration path for current work.
Fabric SQL: SSMS Button, Query Improvements & ARM Woes
The databases section delivered numerous quality-of-life improvements, starting with an SSMS launch button in the query editor ribbon.
“In demos I like to fire up SQL Management Studio… to show that, hey look, you can use SSMS,” John explained. Now a button handles connection automatically—if you have SSMS installed.
Jason’s pointed question: “How’s that working in your VM on your Mac right now?”
“Not working at all because bloody SSMS doesn’t support ARM as of today. SSMS 21 specifically, and 2020,” John confessed.
Jason had demonstrated the feature beautifully in Branson client environments but couldn’t show the class: “Let me update and put it on my VM. Nope… it just doesn’t fire. It lets you install, lets you download and install the whole thing. It doesn’t work at all.”
Critical clarification: “That’s an SSMS problem, not a databases and fabric problem… this feature works beautifully.”
Additional query editor improvements:
- Delete multiple queries at once: Previously one-at-a-time tedium
- Share queries across teams: Collaboration support
- Copilot read-only vs. read-write modes: Suggestion-only or full query generation
Visual Studio Code SQL extension gained Fabric integration—no more “handstands and tricks” to connect Fabric SQL databases. Entra ID authentication, workspace connectivity, and Fabric SQL database provisioning work natively.
Point-in-time restore retention was extended from 7 to 35 days. “Here’s the info, here’s the link,” Jason noted, appreciating the announcement’s brevity.
Git integration improvements for system object references and shared queries appeared via Azure DevOps screenshots (GitHub presumably supported too). Object references validate locally now; shared queries track over time.
REST API import/export for database definitions continued the GA march—”Things you might expect should already be there, but they’re getting there,” John noted.
Performance dashboard memory consumption metrics joined CPU usage monitoring—essential visibility as SQL database approaches GA.
Real-Time Intelligence: Maps, Monitor Integration & The Power BI Question
RTI introduced Maps as a new Fabric item (preview)—distinct from Power BI’s map visuals.
John’s nostalgia: “Remember Power Maps?… over 10 years ago in Excel… you could load it into Power Pivot and then use Power Maps to visualize it and animate it over time.”
The new implementation connects to Event House or Lakehouse data via KQL queries behind the scenes. “Heat maps on the globe… it does look interesting.”
Jason’s confusion: “I don’t understand… real-time has its own dashboarding. Now we’re looking at maps. I don’t understand why we’re not seeing this… it feels like it should be in Power BI.”
John defended the distinction: “Power BI has got the maps for analytics purposes. The RTI workload is dealing more with monitoring scenarios… current state data versus big data over time.”
Jason pushed back: “I may call it a cop out… They could have called it a new type of Power BI report… another place to put visuals and we’re going to have to plumb through in a different way.”
Mobile access became the clincher: “In real-time I would think mobile, there’d be a use case for that.” Maps aren’t mobile-compatible currently.
The documentation muddied data-source expectations: “Users can ingest location data from a Lakehouse or Event House.” So Maps isn’t limited to Event House streaming scenarios.
Jason’s reading: “Visualize it instantly and build map-centric applications without specialized knowledge or writing code. Golly, doesn’t that feel like a Power BI report?”
Despite reservations about siloed visualization experiences, Jason clarified: “I am not in any way trying to poo-poo this. I’m excited about the idea… I just don’t have a use case for it today.”
Azure Monitor Logs integration via Event Streams enabled application telemetry (App Insights, Log Analytics) flowing into Fabric. Real-time dashboards already connect directly to Azure Monitor—this provides Event House ingestion for mashing up with other Fabric data.
“If you wanted to mash it up with other data you might have in Fabric would be a very good use case,” John explained.
Workspace private link for Event Streams cleared security boundary hurdles—”One of those things that has to be there for completeness of the story,” Jason noted.
Activator: 10x Speed (Not Power)
The headline “Activator just got 10 times more powerful in preview” prompted Jason’s critique: “Does faster mean more powerful?… When I read powerful, I thought we’re getting more actions we can do… I can connect this up in different ways. It’s the same activator that we’ve had, just 10 times faster.”
“‘Activator just got 10 times faster’ would’ve been… a great headline… and I wouldn’t have been as disappointed.”
John countered: “If you can do more in the same amount of time, arguably it’s more powerful.”
“I’m okay with that, John. We’re reading the news… I’m being pedantic… sharing my perspective on it.”
The performance improvement requires no configuration—“Just sit back and enjoy the 10x performance improvement”—raising Jason’s taxonomic concern: “Activator’s not in preview, is it?… If there’s nothing, you don’t have to do anything… why is it preview?”
Activator itself reached GA previously. The preview designation remained mysterious for a passive performance enhancement.
Anomaly detection in Activator (actually new, actually preview) applies pattern recognition to data streams, alerting when metrics deviate from learned baselines without hard-coded rules.
Jason’s realization: “Do you think you can attach an activator to that?”
“Yeah, it uses activator.”
“So it’s not just anomaly detection, it’s anomaly notification too… instantly push Teams messages or emails when anomaly occurs.”
The combination matters: “The anomaly detection is cool, but unless it notifies me of it, it’s only as good as me going off and checking.”
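This is not Activator’s actual model, but the detect-then-notify loop the two land on can be sketched as a rolling baseline plus a deviation threshold, with `notify` standing in for an Activator action such as a Teams message or email:

```python
from statistics import mean, stdev

def detect_anomalies(stream, window=5, threshold=3.0, notify=print):
    """Flag points that deviate from a learned rolling baseline.

    No hard-coded rule for "bad" values: the baseline (mean) and the
    tolerance (standard deviation) are learned from a trailing window.
    `notify` is a hypothetical stand-in for an alerting action.
    """
    history = []
    flagged = []
    for value in stream:
        if len(history) >= window:
            recent = history[-window:]
            baseline = mean(recent)
            spread = stdev(recent) or 1e-9  # avoid zero tolerance
            if abs(value - baseline) > threshold * spread:
                flagged.append(value)
                notify(f"anomaly: {value} vs baseline {baseline:.1f}")
        history.append(value)
    return flagged

# Steady metric with one spike: only the spike is flagged.
readings = [50, 51, 49, 50, 52, 51, 120, 50, 49]
print(detect_anomalies(readings))  # -> [120]
```

The point of the sketch is Jason’s observation: detection and notification live in the same loop, so a deviation is pushed out the moment it is seen rather than waiting to be discovered.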
Cliffhanger: Data Factory Awaits
At 30 minutes, facing Data Factory’s substantial updates, they called it.
“I feel like I’m teasing an audience here,” Jason lamented, having pre-recorded an intro promising complete coverage.
John’s assessment: “We’re going to be too rushed if we try to do this the whole time.”
Jason’s team message captured the release’s character: “Not planet-shifting stuff, but instead making fixes to things that were neither bad nor just okay for a while and adding lots more functionality… nothing here that is exploding stuff, but there’s so much here that is making everything so much better or will.”
No single announcement rivaled Fabric’s initial reveal or databases introduction—but cumulative improvements (merge statements, mirror database access, variable libraries escaping pipelines, multitasking UI, CI/CD integrations) transformed workflows incrementally.
Data Factory remained for Episode 308, alongside Power BI’s separate release notes. The September marathon continued—digestible chunks replacing rushed coverage, giving each feature its due consideration.
Links:
- Microsoft Fabric September 2025 Feature Summary
- Episode 306: Fabric September 2025 Part 1
- FabCon Vienna 2025 Keynote day 1
- FabCon Vienna 2025 Keynote day 2
Subscribe: SoundCloud | iTunes | Spotify | TuneIn | Amazon Music

