Deployment Pipelines in Fabric: The Right Way to Handle Lakehouse Shortcuts
Learn how to use Microsoft Fabric’s Variable Libraries to keep Lakehouse shortcuts environment-specific without overwriting them — plus why using environment-agnostic names simplifies your deployments and code.

Introduction
In a previous article, I answered a question from one of my readers: “Why do my deployment pipelines override lakehouse shortcuts?”

At the time, I promised to publish a follow-up with a detailed, step-by-step walkthrough. Well, here it is — I’m fulfilling that promise!
Setting the stage
One of the key challenges when working with lakehouse shortcuts in Microsoft Fabric is managing environment-specific connections: for example, ensuring your Dev shortcuts point to Dev data and your Prod shortcuts point to Prod data. That's where Variable Libraries come in! With variable libraries, each environment keeps its own values, so deployments no longer overwrite your shortcuts.
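Conceptually, a variable library maps one variable name to a different value per deployment stage, and the pipeline resolves the right value for the workspace it deploys to. The sketch below mimics that lookup in plain Python purely for illustration; the variable name, connection names, and stage labels are hypothetical, since real variable libraries are defined as Fabric items, not in notebook code:

```python
# Illustrative sketch only: variable libraries live in Fabric, not in notebooks.
# The variable name "csv_connection" and the connection values are made up.
variable_library = {
    "csv_connection": {          # one variable, one value set per stage
        "Dev":  "dev-adls-connection",
        "Test": "test-adls-connection",
        "Prod": "prod-adls-connection",
    }
}

def resolve(variable: str, stage: str) -> str:
    """Mimic what a deployment pipeline does: pick the stage's value."""
    return variable_library[variable][stage]

print(resolve("csv_connection", "Prod"))  # prod-adls-connection
```

Because the shortcut references the variable rather than a hard-coded connection, deploying the same shortcut to another stage simply resolves to that stage's value.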
This Fabric item moved from preview to general availability on September 30th, 2025 (see the Fabric September 2025 Feature Summary).
Solution
To make the explanation easier to follow, I’ve recorded a full step-by-step video tutorial where I demonstrate the complete setup — from creating connections and shortcuts to using variable libraries and deployment pipelines.
What You’ll Learn
✅ Configure Fabric connections for each environment
✅ Create and manage Variable Libraries (now generally available!)
✅ Link them to your Lakehouse shortcuts
✅ Deploy confidently without losing your environment settings
One additional note on shortcut names
When creating shortcuts (or even databases), it’s best not to include environment names like -dev, -test, or -prod in their naming convention.
From a DevOps perspective, what defines whether a shortcut belongs to Dev, Test, or Production is the connection behind it, not the shortcut name itself.
For example, suppose you have the following line in a notebook:
df = spark.read.format("csv").option("header", "false").load("Files/csvfiles/*")
By keeping the shortcut name environment-neutral (csvfiles instead of csvfiles-dev), you can reuse this exact same code in Dev, Test, or Prod. The shortcut automatically points to the correct data source based on its connection configuration.
If, on the other hand, you name your shortcuts with environment suffixes, you’d need to parameterize your code like this:
df = spark.read.format("csv").option("header", "false").load(f"Files/csvfiles-{environment}/*")
This adds unnecessary complexity and introduces another variable to manage, one that can easily be avoided.
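To make that extra bookkeeping concrete, here is a minimal sketch (the helper name and paths are hypothetical) of what environment-suffixed shortcut names force every notebook to carry: a path-building step that depends on knowing the current environment.

```python
# Hypothetical helper illustrating the extra work env-suffixed names require:
# every notebook must resolve `environment` before it can build a path.
def shortcut_path(base: str, environment: str = "") -> str:
    """Build a Files path; with env-neutral names the suffix logic disappears."""
    suffix = f"-{environment}" if environment else ""
    return f"Files/{base}{suffix}/*"

# Env-suffixed naming: the caller must know the environment.
print(shortcut_path("csvfiles", "dev"))   # Files/csvfiles-dev/*

# Env-neutral naming: the same call works unchanged in every workspace.
print(shortcut_path("csvfiles"))          # Files/csvfiles/*
```

With environment-neutral names, the second form is all you ever need, and the `environment` value never has to be passed around at all.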
Call to action
As always, I hope this helps you streamline your Microsoft Fabric workflows and better understand the platform’s evolving capabilities.
I’d love to hear from you — feel free to share your thoughts, questions, or experiences in the comments below.
Your feedback not only helps others in the community but also inspires new topics for future tutorials!