“Database migrations made easy” and “Version control for your database” are a couple of headlines you will find on Flyway’s official website. And let me tell you this, those statements are absolutely correct. Flyway is a multi-platform, cross-database version control tool with over 20 supported databases.
In all my years of experience working as an architect on monolithic and cloud-native apps, Flyway is by far the easiest and best tool on the market for managing database migrations.
Whether you are an experienced data professional or starting to get involved in the world of data, this article is the foundation of a series that will get you through this fantastic journey of database migrations with Flyway.
Background history
Flyway was created by Axel Fontaine in early 2010 on Google Code under the Apache 2.0 license. In Axel’s words, it all started when he searched for a tool that would allow integrating application and database changes easily and simply, using plain SQL. To his surprise, no such tool existed, which makes total sense to me because there were not many options back then.
To put the previous paragraph in context: everything we know as DevOps today was conceived around 2009, and Jez Humble and David Farley released their influential book “Continuous Delivery” in 2010. Axel was therefore, without question, a pioneer in deciding to write his own tool to solve this widespread software development problem: making database changes part of the software deployment process.
Flyway was well received by the developer community, leading to rapid growth and evolution. For example, the list of supported databases grew, support for additional operating systems was added, and many more features were included from version to version.
The next step in Flyway’s evolution was the launch of the Pro and Enterprise editions back in December 2017, a smart decision to secure the project’s progression and viability. Without question, Flyway was already the industry-leading standard for database migrations at that time.
Around mid-2019, Redgate Software acquired Flyway from Axel Fontaine. Redgate’s expertise in the database tooling space opened the door for Flyway to new opportunities in expansion, adoption, and, once more, evolution!
Database migrations
You are probably already familiar with the term database migration, which can mean several different things within the context of enterprise applications. It could mean moving a database from one platform to another, or from a previous version of the DBMS engine to the most recent one. Another common scenario these days is moving a database from an on-premises environment to a cloud IaaS or PaaS solution.
This article is not about any of those practices. Instead, it will get you started with database migrations in the context of schema migrations: the practice of evolving a database schema through incremental, reversible, and consistent changes, using a simple approach that lets database changes be integrated with version control and application deployment processes.
Before digging deeper into this topic, I would like to address the basic requirements of database migrations. Trust me, this topic is fascinating and full of great information that will help you adopt this practice. Whether you are a software developer, database administrator, or solutions architect, understanding database development practices like this is essential to becoming a better professional.
“Evolutionary Database Design” is the title of an article published on Martin Fowler’s website in May 2006. It is an extract of the book Refactoring Databases by Scott Ambler and Pramod Sadalage, also released in 2006. The article goes above and beyond in explaining the evolution of database development practices through the years, providing techniques and best practices for embracing database changes in software development projects, especially when adopting agile methodologies.
The approach described in this book sets the stage for a collection of best practices that should be followed to be successful.
DBA and developer collaboration
Software development practices like DevOps demand that people with different skills and backgrounds collaborate closely, knocking down the silos and bottlenecks between teams, like the usual separation between development and operations.
In a database development effort, collaboration is crucial to the success of the project. Developers and DBAs should work in harmony, assessing the impact of proposed database changes before implementing them. Anybody can take the initiative to start conversations about whether the database code is optimal, secure, and scalable, or simply to make sure it follows best practices.
Version control
Without question, everybody benefits from using version control. All the artifacts that are part of a software project should be included in it to keep track of each contributor’s individual changes: the application code, unit and functional tests, database scripts, and even other code types such as the build scripts used to create an environment from scratch, known today as Infrastructure as Code.
All database changes are migrations
All database changes created during the development phase should be captured, without exception. This approach encourages treating database change files like any other artifact of the application, saving and committing them to the same version control repository as the application code so they are versioned together.
Migration scripts should include, but are not limited to, any modification made to your database schema, such as DDL (data definition language) and DML (data manipulation language) changes, as well as data corrections implemented to solve a production data problem.
Everybody gets their own instance
It is very common for organizations to have shared database environments. This is often a bad idea due to the inherent risk of project delays caused by unexpected resource contention problems, or by interruptions from the development team itself: for example, someone working on a set of database objects discovers that those objects were modified as part of a last-minute database schema refactoring.
Everyone learns by experimenting with new things. Having a personal workspace where one can explore creative ways to solve a problem is excellent! More importantly, being able to work free of interruptions increases productivity.
Leveraging technologies like Docker containers to create an isolated and personal database development environment/workspace seems like a good way to resolve this issue. Other solutions like Windows Subsystem for Linux (WSL) take this approach to a whole new level, providing an additional operating system on top of the Windows workstation.
Leverage continuous integration
Continuous Integration (CI, for short) is a software development practice that consists of frequently merging all changes from developers’ workspace copies into a shared branch.
Best practices recommend that each developer should integrate all changes from their workspace into the version control repository at least once a day.
There is a plethora of tools available to set up a continuous integration process like the one recommended above. Which one to choose depends on the size of the organization and its budget. The most popular are Jenkins, CircleCI, Travis CI, and GitLab.
According to the theory behind this practice, there are a few key characteristics a database migration tool should meet:
All migrations must have a unique identifier
All migrations must be recorded in a migration history table
All migrations should be repeatable and reversible
All these practices and characteristics sound attractive to speed up a database development effort. However, the question is: How and what can we use to approach database migrations easily? Worry no more, Flyway to the rescue!
What is Flyway?
Flyway’s official documentation describes the tool as an open-source database migration tool that strongly favors simplicity and convention over configuration, designed to facilitate continuous integration processes for any database on any platform.
Migrations can be written in plain SQL, of course, as explained at the beginning of this article. These migrations must follow the specific syntax rules of each database engine, such as PL/pgSQL for PostgreSQL, T-SQL for SQL Server, PL/SQL for Oracle, and so on.
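To make this concrete, here is a minimal sketch of a plain-SQL versioned migration. The file name follows Flyway's default `V<version>__<description>.sql` convention; the customer table itself is a made-up example:

```shell
# Create a versioned migration in Flyway's default "sql" folder.
# The name encodes version "1" and a description, separated by a double underscore.
mkdir -p sql
cat > sql/V1__create_customer_table.sql <<'SQL'
-- Applied once by "flyway migrate"; its version and checksum are then
-- recorded in the flyway_schema_history table.
CREATE TABLE customer (
    id    INT PRIMARY KEY,
    name  VARCHAR(100) NOT NULL,
    email VARCHAR(255)
);
SQL
ls sql
```

Subsequent changes get higher versions (V2, V2.1, and so on), and Flyway applies them in order.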
Flyway migrations can also be manually executed through its command-line client or programmatically using the Java API, Docker containers, or Maven and Gradle plugins.
It supports more than twenty database engines by default. Whether the database is hosted on-premises or in a cloud environment, Flyway will have no problem connecting to it, leveraging the JDBC driver library shipped with the tool.
Flyway folder architecture
At the time of this writing (December 2020), Flyway’s latest version is 7.3.2, which has the following directory structure:
* Screenshot is taken from Flyway official documentation
As you can see, the folder structure is very straightforward; the documentation is so good that it even includes a brief description for some of the folders. Let’s take an in-depth look and define each of these folders.
The conf folder is the default location where Flyway looks for the database connectivity configuration. Flyway uses a simple key-value pair approach to set and load specific settings via the flyway.conf file. I will address the configuration file in detail in future articles; for now, I will stick to this simple definition.
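As a quick sketch, assuming a local PostgreSQL database, a minimal conf/flyway.conf might look like the following; the connection values are placeholders, not part of any real environment:

```shell
# Write a minimal flyway.conf with placeholder connection settings.
mkdir -p conf
cat > conf/flyway.conf <<'CONF'
# Key-value pairs read by Flyway at startup (placeholder values).
flyway.url=jdbc:postgresql://localhost:5432/shinydb
flyway.user=shiny_user
flyway.password=secret
flyway.locations=filesystem:sql
CONF
grep '^flyway\.url' conf/flyway.conf
```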
Flyway was written in Java, hence the existence of JRE and lib folders. I strongly recommend leaving those folders alone; any modification to the files within these folders will compromise Flyway’s functionality.
The licenses folder contains the team, community, and third-party license information in the form of text files; these three files are available if you want to read all the details of each type of license.
The drivers folder is the place where all the JDBC drivers mentioned before can be found, in the form of jar files. I believe this folder is worth exploring in detail to see what ships with the tool in terms of database connectivity through JDBC.
I will use my existing Flyway 7.3.2 environment for macOS. I’ll start by verifying my current Flyway version using the flyway -v command:
Good, as you can see, I’m on version 7.3.2. This is the same version shown in the official documentation screenshot describing the folder structure. Now, I will find the folder where Flyway is installed using the which flyway Linux command:
Using the command tree -d, I can list all folders inside the Flyway installation path:
Then I simply have to navigate towards the drivers folder and list all files inside this path using the ls -ll Linux command:
Look at that long list of JDBC drivers in the form of jar files; right out of the box, you can connect to the most popular database engines like PostgreSQL, Microsoft SQL Server, SQLite, Snowflake, MySQL, Oracle, and more.
Following the folder structure, there are the jars and sql folders, where you store your Java- or SQL-based migrations. Flyway looks in these folders by default to automatically discover filesystem (SQL script) or classpath (Java) migrations. Of course, these default locations can be overridden at execution time via a config file or environment variables.
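For example, the default locations can be overridden either with the FLYWAY_LOCATIONS environment variable or with the -locations command-line flag; the paths below are hypothetical:

```shell
# Environment-variable form, picked up by the Flyway CLI when it runs:
export FLYWAY_LOCATIONS="filesystem:./sql,classpath:db/migration"
echo "$FLYWAY_LOCATIONS" > locations.txt
cat locations.txt
# Command-line flag equivalent (not executed here, since it needs a database):
# flyway -locations=filesystem:./sql,classpath:db/migration migrate
```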
Finally, there are the executable files. As you can see, there are two: one for macOS/Linux systems (flyway) and one for Windows (flyway.cmd).
How it works
Take a look at the following visual example, where there is an application called Shiny Soft and an empty shell database called Shiny DB. Flyway is installed on the developer’s workstation, where a couple of migrations were created to deploy some database changes.
The first thing Flyway will do when starting this project is to check whether the migration history table exists. This example begins the development effort with an empty shell database. Therefore, Flyway will proceed to create the flyway_schema_history table on the target database called Shiny DB.
Right after creating the migration history table, Flyway will scan and apply all available migrations in its default locations (jars/sql).
At the same time, the flyway_schema_history table is updated with two new records, one for each of the available migrations (Migration 1 and Migration 2).
This table will contain a high level of detail that will help you to understand better how the database schema is evolving. Take a look at the following example:
As you can see, there are two entries. Each has a version, description, type of migration, the script used, and more audit information.
This metadata is valuable and crucial to Flyway’s functionality because it helps Flyway keep track of the actual and future versions of your database. And yes, Flyway is also capable of identifying which migrations are pending to be applied.
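If you want to inspect that metadata yourself, a simple query against the history table does the job. The sketch below only writes the query to a file; run it with your database's own client (the table name is Flyway's default):

```shell
# Save an ad-hoc query against Flyway's history table for later use.
cat > check_version.sql <<'SQL'
-- Current and historical schema versions, newest last.
SELECT installed_rank, version, description, type, success
FROM   flyway_schema_history
ORDER  BY installed_rank;
SQL
cat check_version.sql
```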
Imagine a scenario where Migration 2 needs to be refactored, creating just one table instead of two. What you want to do is to create a new file called Migration 2.1. This migration will include the DDL instructions to drop the two existing tables and create a new one instead.
Flyway will automatically flag and update this new migration as pending in the flyway_schema_history table; however, it will not apply such migration until you decide to do it.
Once Migration 2.1 is applied, Flyway will update the flyway_schema_history table with a new record for the latest migration applied:
Notice that the third record, corresponding to database version 2.1, is not a SQL script; hence, the type column shows JDBC. This was a Java API migration, successfully applied to perform a database refactoring change.
Advantages
At this point, you should be a little more familiar with Flyway. I briefly described what it is and how it works. Now, stop and think about the advantages you gain by making Flyway the central component of your database deployment management.
In software development, as with everything you do in life, the longer you take to close the feedback loop, the worse the results are. Evolving a monolithic legacy database, where every database change is performed following the state-based deployment approach, can be challenging. However, choosing the right tool for the job should make your transition to migration-based deployments easier and less painful.
Embracing database migrations with Flyway could not be easier. Whether you choose to start with SQL script-based migrations or Java classes, the learning curve is relatively small. You can always rely on Flyway’s documentation to check, learn, and get guidance on every single command and functionality shipped with the tool out of the box.
For starters, you don’t have to worry about keeping detailed control of all changes applied to your database. All the information from past and future migrations is held in great detail in Flyway’s schema history table. This is not just a simple control table; what I like about it is the level of detail recorded for every single migration applied to the database. You will be able to identify the type of migration (SQL, Java), who ran it, when, and exactly what was changed in your database.
Another major pain point solved by Flyway is database schema mismatch, a widespread and painful problem encountered when working with different environments like development, test, QA, and production. Recreating a database from scratch while specifying the exact schema version you want to deploy is a powerful thing. A database migration tool like Flyway will ensure that all changes belonging to a specific version of your application are applied. Database changes should be deployed together with application changes.
Conclusion
This article provides a foundation and detailed explanation of Evolutionary database design techniques and practices required to approach database migrations with tools like Flyway.
I also included a summary of Flyway as a database migration tool, starting from its early days and explaining why and how the tool was born. Finally, I explored its folder structure and components and provided a visual, descriptive example of how this tool approaches database migrations with ease.
Please join me for the next article in the series, which focuses on installing Flyway’s command-line tool on Linux/macOS and Windows, and explores all the details of its configuration through config files and environment variables.
Since the 1970s there’s been a steady decline in the number of free-standing relational database companies until only Oracle remains. Familiar names like Sybase, Ingres, Informix, MySQL, SQL Server and others are either out of business or have been acquired.
So, it would be reasonable to believe that the database market has commoditized, and that one relational DB is as good as another (plus or minus), though today Oracle has the dominant share of the market. But if commoditization was true a few years ago, it’s certainly not now.
In the recent past, lost market share, technology hiccups, and a minor rebellion among users over pricing and related policies aimed at Oracle helped launch a new generation of databases from startups and well-heeled competitors like Amazon.
Trouble is, the reports of commoditization and of Oracle’s flagging market presence were greatly exaggerated. Over the last decade, Oracle has built up its flagship product to be not simply better, but much better, than the competition. It did this in several ways.
First, it developed hardware like Exadata to move big database workloads into memory. It was a predictable move, but greatly appreciated by large DB consumers because it removed the performance bottleneck of relatively slow hard disks, thus enabling databases to run as much as a million times faster. Disk drives operate at millisecond rates, while silicon runs at nanosecond rates, potentially enabling that dramatic speed-up.
Next, and perhaps just as important, Oracle embraced cloud computing. Even though it had supported the majority of cloud companies from the beginning, and even though Oracle developers lived with cloud customers to see how to boost performance, it was seen as a laggard because the company didn’t offer cloud applications of its own.
Today’s Business Needs
Nonetheless, Oracle learned a lot from its customers and plowed its findings back into its core product just as others were getting restless and seeking alternatives.
Competition has heated up, and today there are good and credible competitors for Oracle, though Oracle and Gartner have scads of data documenting Oracle’s superior performance.
One big difference that Oracle exploits is that its competitors now offer multiple versions of their databases tuned to specific workloads like OLTP or analytics. It’s a good strategy that enables a company to concentrate on optimizing various functions for discrete markets.
But that’s not how we work.
Modern business has converged needs because applications like CRM continue to demand more styles of computing and database access than ever before. Back when relational databases existed to support row and column screens and reports, there was less challenge.
Today though, a CRM application might need the same data, but it will also want to know the next best offer to provide, an analytics and machine learning job. It might also require serious graphics processing to give users visual understanding of possible next steps.
In short, our business apps don’t do just one thing and our databases have to keep up. The idea of specialization makes sense in theory, but the idea loses a lot in practice — especially if a given specialty database is not as robust as the converged version.
Converged Database
That’s why I was so intrigued by one particular part of Oracle’s briefing about the new 21c version of its flagship RDB.
According to Oracle and Gartner, the Oracle Autonomous Database placed first in all four operational use cases tested — and first or second for all four analytical use cases according to Gartner’s publication, “Critical Capabilities for Cloud DBMS for Operational Use Cases.”
That’s not supposed to happen. Specialists are supposed to be better at their one thing than generalists are at trying to be all things to all people. But not here. In this case, the performance leader continues to be one of the earliest players in the market. Moreover, all of the significant resources that companies can throw at the problem (i.e. money) have not resulted in significant advantages.
Just to highlight the point, Oracle is now calling its product a “converged database” to help with differentiation, emphasizing that many businesses don’t focus only on OLTP or AI, but rather require a bit of everything. You can search the new version of Oracle’s DB, 21c, for the details.
There’s no doubt that the database market is undergoing a second flowering after decades of status quo stasis, and there are some good options out there. Some of the advances had to wait for faster hardware and others needed a business case to capture part of the billions of dollars that Oracle spends on R&D every year.
To get that truism right for once, the proof of the pudding really is in the eating.
Denis Pombriant is a well-known CRM industry analyst, strategist, writer and speaker. His new book, You Can’t Buy Customer Loyalty, But You Can Earn It, is now available on Amazon. His 2015 book, Solve for the Customer, is also available there.
On my computer I have two versions of Power BI Desktop installed: one from the Microsoft Store, which is updated automatically, and one downloaded manually (typically last month’s edition).
But in my taskbar it’s impossible to tell the difference between the two.
We can solve that by changing the icon for the downloaded version; it’s not possible for the Store version.
If you right-click the icon in the taskbar, and then right-click Power BI Desktop, you can select Properties for the app.
Now click Change Icon.
This will show you the current icon; you can change it by clicking Browse. In my case, I will select the icon for the PBIDocument file type.
Click Open, and the icon will be set. When you click OK, you will see that the icon has changed for the shortcut.
Note that the change may not show up immediately, but after a restart it will appear.
Hope this can make your choice of Power BI Desktop versions easier for you as well.
Power-CRM update 1.0.0.40 is now available. This version upgrade includes 17 bug fixes and 15 new functionality items.
Along with version 1.0.0.40, we are also releasing the Power-CRM Sales Analysis App. This optional add-on solution is a full analytics app for Power-CRM containing 10 full dashboards/reports and over 70 charts/graphs/KPIs. See below for more information.
The Sales Analysis App is built on Microsoft Power BI and contains deep sales performance analytics; its 10 full reports/dashboards and over 70 individual charts/graphs/KPIs provide a complete analysis of your sales performance. The app includes date slicers and other drill-down options and is fully mobile-enabled.
Standard Dashboards Added
Version 2.0 includes two new sales dashboards, Sales Manager and Sales Rep, providing a single pane of glass for managing your sales team.
LiveChat Integration
We have added integration with LiveChat so your internal and external portal users can use LiveChat right inside Power-CRM. This gives users the ability to easily get support for Power-CRM from either your internal Power-CRM admin(s) or from our Help Desk.
To enable your internal IT/CRM Admin(s) to provide support via chat, a LiveChat subscription is required.
These are only a few of the new features added in 1.0.0.40. For the full release notes, visit our website.
About the Author: David Buggy is CEO & Founder of Power-CRM, Inc. and a veteran of the CRM industry with 18 years of experience helping businesses transform by leveraging Customer Relationship Management technology. He has over 16 years of experience with Microsoft Dynamics CRM/365 and has helped hundreds of businesses plan, implement, and support CRM initiatives. To reach David call 844.8.STRAVA (844.878.7282) To learn more about Power-CRM, visit https://power-crm.net
Work 365 helps companies grow and scale their recurring revenue with Subscription management and Billing Automation capabilities built on Dynamics 365.
Work 365 Version 2.7 addresses the core needs of business insights and self-service. Recurring Revenue and Subscription-based sales happen as a series of collective events that take place as customers stay engaged with a service provider.
The customer may start with one service and then continue to grow their engagement both through their own organic needs and through the expanded service offerings. This release addresses some of these needs by delivering better customer engagement through the self-service portal capabilities and with the Analytics and reporting capabilities that surface customer insights.
Work 365 version 2.7 features
Brand New UI to match the UCI interface
Enhancements to the Self-Service portal solution
Non-Recurring Billing Contract Changes
Enhanced Agreement Templates and Billing Contracts
Customer Agreements to support the Microsoft CSP requirements
New integrations: TechData; QuickBooks Desktop
Version 2.7 enables partners to add non-recurring items to their recurring billing contracts.
Non-Recurring Billing Contracts are no longer required to invoice Non-Recurring Items. Non-Recurring items can now be added to Regular Billing Contracts. NRIs that are added to regular Recurring billing contracts are billed as part of the regular invoicing cycle of the Billing Contract that it is added to.
The main difference is that an NRI added to a regular billing contract is included in the next invoice that is generated, whereas NRIs within non-recurring billing contracts are invoiced the next day. Non-recurring billing contracts can still be used for NRIs; this change simply gives partners more flexibility to decide how they would like to manage billing for NRIs. For resources on these items refer to:
Below are some screenshots of recurring and non-recurring billing contracts and items.
Check out these resources to learn more about Work 365!
There is a new feature in V9 that allows us to define a field type as “auto-number.” As a few Microsoft Dynamics 365 commentators have noted, there is no way as yet to set a field as auto-number from the UI. It must be done using the API.
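For reference, the API exposes this capability through an AutoNumberFormat property on the string attribute's metadata, using Microsoft's documented {SEQNUM:n} placeholder syntax. The sketch below only writes out the JSON fragment one might send when creating or updating the attribute; the "WID-" prefix and the sequence length are hypothetical:

```shell
# A hypothetical request-body fragment for defining an auto-number format
# on a string attribute via the Web API (values are placeholders).
cat > autonumber.json <<'JSON'
{
  "AutoNumberFormat": "WID-{SEQNUM:5}"
}
JSON
cat autonumber.json
```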
There are a few installable solutions and external tools available that can do this for you, but that is outside the scope of this blog. Today, we’re talking about some of the things we’ve learned about the behavior of auto-numbered fields when importing data or solutions.
Early descriptions of the feature indicated that it was necessary to define a field as auto-number as it was being added to the schema. However, experimentation has shown that it is possible to add the feature to an existing text field. This means that you can add auto-numbering to a text field that already exists in your Org – you don’t need to deprecate the field and use a new one, or go through the tedious process of removing dependencies, dropping the field, re-adding it with the new data type and putting the dependencies back in.
V9 Auto-number fields are text fields with an additional property indicating that they are to have auto-numbering applied.
Adding this property does not affect the data in existing records, but it does make the field read-only. Any new records will be assigned a value by the built-in auto-number plugin, which runs pre-create when a new record is created.
If you import a solution from an Org that has a field defined as auto-number into a D365 Org in which the same field is defined as plain text, it will add that property and the field will be updated to auto-number.
However, if you import a solution from an Org that has a field defined as plain text into an Org in which the same field is defined as auto-numbered, it remains auto-numbered and none of the numbering is affected.
Importing an auto-numbered field into a new org resets the numbering seed to the default of 1001 for the next record added. This occurs whether the field is new or updated: the feature uses the database “sequence” function, and the solution has no awareness of the last number used in the source D365 org. For more information, see the details under Set Seed Value in the Microsoft documentation referenced above.
If imported records have data in the field, they will retain the value they are brought in with. If there is no data in the field (either because the CSV column is empty or because the field is not present), they will be assigned sequence numbers when imported.
There is no guarantee that the numbers applied are unique. If a pre-existing record has a number that will be assigned by the feature at some point in the future, the feature will apply that number. If a record is imported with a value in the sequence number field that is also present in the database, it will generate a duplicate number (unless you defined the auto-number field as an alternate key).
Hopefully, you find these learnings helpful. Don’t forget to subscribe to our blog for more Dynamics 365 tips and tricks!
When working with integration packages, or any code, we sometimes need to roll back all the changes. Using TFS labels lets you take a snapshot of the deployed files, which helps you review, build, or roll back changes easily.
This blog will show you the step-by-step process on how to apply labels.
1. Right-click on the file or folder you would like to have a label.
2. Add the Label.
If you need to roll back or view the labeled version of the code, you can find the label as shown below:
In Source Control Explorer, go to Find, and then Find Label.
3. Type in the label you would like to find.
The Find button will display the list of matches. Choose the label and retrieve that version of your files.
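For completeness, the same workflow can be scripted with tf.exe, the TFVC command-line client that ships with Visual Studio. Because tf.exe is Windows-only, the sketch below only writes the commands to a reference file rather than executing them; the label name and server path are placeholders:

```shell
# Record the tf.exe equivalents of the label-and-retrieve steps above.
cat > label-steps.txt <<'TXT'
tf label "Release-1.0" $/MyProject/Integration /recursive
tf get $/MyProject/Integration /version:LRelease-1.0
TXT
cat label-steps.txt
```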
We are adding support in the existing version-agnostic SQL Server MP (2017+) to monitor older versions of the server, specifically 2012 through 2016. SQL Server 2008 and 2008 R2 end of support is nearing (July 2019), so we will leave 2008 monitoring in its own MP. This is an update to the public preview we released earlier this year (release blog post). Please refer to that blog post for the details of the planned changes, as they involve some naming changes and impact custom MPs.
Please install and use this public preview and send us your feedback (sqlmpsfeedback@microsoft.com)! We appreciate the time and effort you spend on these previews which make the final product so much better.
- Added a new property, “Versions of SQL Server to be excluded”, to “SQL Server on Windows Local DB Engine Discovery” to exclude specified SQL Server versions from being discovered
- Added the monitor “Database Log Backup Status” to check when the transaction log was last backed up
- Added the monitor “Database Backup Status” to check the age of the last backup of a replica database
- Added support for Distributed Availability Groups in SQL Server 2016 and higher
- Added a monitor and a performance rule to watch the number of VLFs: “Virtual Log File Count” and “DB Virtual Log File Count”, respectively
- Added support for AG Configuration-Only replicas in SQL Server on Linux
- Added a check to every workflow to make sure it will not run against an unsupported SQL Server version
- Improved the way workflows get information about which SQL Server namespaces are available in WMI
- Updated the monitor “Product Version Compliance” to check instances of SQL Server 2012 and higher
- Updated some display strings
- Fixed: focus jumps out of the SCOM console window when opening the Monitoring Wizard
- Fixed: “DB Engine Seed” discovery doesn’t throw an error message when the action account doesn’t have enough permissions to access the required keys in the Windows Registry
We are looking forward to hearing your feedback at sqlmpsfeedback@microsoft.com.
Microsoft is changing the way Microsoft Dynamics 365 updates are delivered: major cloud updates will be deployed twice a year, in April and October, to provide new capabilities and functionality to all Dynamics 365 users. These updates will be backwards compatible, while regular performance updates will continue to roll out throughout the year as before, ensuring business continuity for organizations.
While customers were previously able to skip a cloud release, they will no longer be able to do so: all users will now be required to migrate to the latest version with each new release. For this reason, all users will have to be on the new version before January 31, 2019.
What does this mean for your organization?
This new continuous update deployment will bring several benefits to users and organizations using Dynamics 365:
Organizations will enjoy up-to-date features and optimized performance. A cloud-based solution ensures immediate access to new functionalities, but organizations sometimes delay their migration to new versions. By ensuring that everyone is using the latest version, users will be able to take advantage of the latest features and capabilities as they are released.
The learning curve will not be as steep for your users. You will face less resistance to change within your organization. Users will have a much easier time adapting to the new features and interface changes with each new update, instead of having to contend with many significant changes at once.
Costs and downtime will be kept to a minimum. In the long run, skipping updates does not save you time or money. In fact, the longer you wait, the harder migration becomes, as the upgrade path through the previous versions still has to be followed. Migrating with each update will ensure that the process is as smooth as possible every time.
Microsoft will be able to offer better support and an improved platform to all users. Having all users on fewer versions will help Microsoft improve their features, performance and support, as their resources will be focused on a single platform.
New capabilities and third-party products can be tested ahead of time to avoid system disruptions. Partners and customers will be able to test capabilities and major updates in a sandbox environment in advance. Partners will be able to better prepare their products for general availability, while organizations can validate updates before the update release to avoid any disruption.
This continuous update deployment will make it important to follow best practices and use as many features out of the box as possible to ensure smooth migrations. That said, organizations will be able to accelerate their digital transformation in a consistent and predictable manner, overall increasing reliability and performance. For more information, please read the official announcement blog post by Microsoft.
As of September 1st, only API version 1.2 and above will be supported.
What does this mean for you as a custom visual developer?
The vast majority of custom visuals are built on API version 1.2 or above. However, if you have an older custom visual that was built using tools published before September 2016, it may be using deprecated APIs.
You can verify the API version of your visual by looking at the "apiVersion" property in the pbiviz.json file.
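If you have many visuals to audit, this check is easy to script. The sketch below assumes a standard pbiviz.json layout with a top-level "apiVersion" string; the helper name `check_api_version` is made up for this example.

```python
import json
from pathlib import Path

def check_api_version(pbiviz_path, minimum=(1, 2)):
    """Return True if the visual's apiVersion meets the minimum supported version."""
    config = json.loads(Path(pbiviz_path).read_text())
    # apiVersion is a dotted string such as "1.1.0" or "2.6.0"
    version = tuple(int(part) for part in config["apiVersion"].split("."))
    return version >= minimum

# A visual built with pre-September-2016 tools might report an older version:
Path("pbiviz.json").write_text(json.dumps({"apiVersion": "1.1.0"}))
print(check_api_version("pbiviz.json"))  # False, so this visual needs migrating
```

Tuple comparison handles the dotted version parts numerically, so "1.10.0" correctly sorts above "1.2.0", which a plain string comparison would get wrong.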
What should you do?
If you are using deprecated APIs, you should migrate to a supported API version (1.2 and above) to avoid potential breakage of your visuals.
It is also recommended to update your custom visual with the latest tools and APIs from time to time, to benefit from the latest features and enhancements.
As of the September Power BI release, we will issue a warning to report authors for each visual using a deprecated API, in both Power BI Desktop and the web (edit mode), with the following statement:
“This visual is using an old and unsupported version of interface and will be deprecated soon. Please replace this visual with an alternative one from our in-house or marketplace visuals.”