Tag Archives: DevOps

Tagging In Azure DevOps

November 8, 2020   Microsoft Dynamics CRM

What is Tagging? Tagging has long been a term used to describe a kind of graffiti to mark public spaces. In Azure DevOps (or ADO), tagging is similar because it can serve as a colorful label for work items to visually distinguish one epic from another, isolate a critical user story from hundreds of others, and so on. The big difference here is that tags in ADO should be encouraged (though…

PowerObjects - Bringing Focus to Dynamics CRM

10 DevOps strategies for working with legacy databases

November 1, 2020   BI News and Info

Database teams often maintain large and complex legacy databases that they must keep operational to support existing applications. The teams might have maintained the databases for years, or they might have inherited them through acquisitions or other circumstances. Whatever the reasons, they likely manage and update the databases in much the same way they always have, performing their tasks mostly segregated from other efforts. But this approach makes it difficult for organizations to fully embrace modern application methodologies such as DevOps, which rely on more agile and aggressive application delivery processes.

The traditional methods for maintaining legacy databases can slow application delivery, impact productivity, and increase overall costs. Some organizations are starting to incorporate databases into their DevOps processes, but it’s not always an easy goal to achieve. Application and database development have historically been much different, and synchronizing changes can be a significant challenge. Another serious consideration is data persistence, which plays a major role in database updates, but is of little importance to the applications themselves.

Despite these issues, many organizations have come to recognize the importance of incorporating databases into the DevOps model. By applying the same principles to databases as those used for delivering applications, they can better control the entire application development and deployment process, including the databases that go with them. For organizations ready to make the transition, this article offers 10 strategies for incorporating legacy databases into their DevOps pipelines.

1. Choose your database strategy wisely.

Before embarking on your database DevOps journey, give careful consideration to how you should proceed. Start by identifying your business objectives and long-term strategies. In some cases, it might not be worth investing the time and resources necessary to apply DevOps to a legacy database. For example, the database might support applications that will soon be phased out, in which case, you might need to focus on how to migrate the data, rather than continuing to support the legacy database.

For those databases that you’ll continue to support, keep the overall scope in mind. It’s better to take one small step at a time than try to bring every database into the DevOps fold all at once. You should be planning for the long-term and not trying to do everything overnight. You might start with a small database to see how the process goes and then move on from there, or you might deploy a new database so your database team can become familiar with DevOps principles before tackling the legacy databases.

2. Create a DevOps culture that includes the database team.

Discussions around DevOps inevitably point to the importance of creating a culture of collaboration and transparency, one that fosters open communications for all individuals involved in application delivery. DevOps team members should be encouraged to work together and share in the responsibility for application delivery, adopting a mindset in which everyone has a stake in the outcome.

Not that long ago, DevOps teams rarely included database team members, leaving them out of the discussion altogether. For the most part, databases were seen as completely separate entities from the applications they served. However, the only way to successfully incorporate legacy databases into the DevOps process is to ensure that the database team is as much a part of the DevOps effort as the development and operations teams. The better the communications between the database team and everyone else, the smoother the transition to database DevOps.

3. Get the right tools in place.

DevOps teams need the right tools to support the continuous integration/continuous delivery (CI/CD) pipeline common to DevOps deployments. For example, a team might use Chef for configuration management, Jenkins for build automation, Kubernetes for deploying containers, or NUnit for unit testing. The exact tools depend on an organization’s specific requirements and the type of applications they’re deploying. DevOps teams should select their tools carefully, taking into account the growing need to support database deployments.

Initially, DevOps solutions tended to leave databases out of the equation, but that’s been steadily changing. Many DevOps tools now accommodate databases, and many database tools now accommodate DevOps. For example, Redgate Deploy can help incorporate a database into the CI/CD pipeline and can integrate with any CI server that supports PowerShell. Redgate Deploy can also integrate with common CI and release tools, including Jenkins, GitHub, TeamCity, or Azure DevOps.

4. Prepare the database team for DevOps.

DevOps methodologies require special skills to ensure that applications are properly built, tested, and deployed to their various environments. If developers also implement infrastructure as code (IaC), DevOps teams require skills in this area as well. In some cases, an organization might need to bring on additional personnel or outside consultants. For many DevOps teams, however, all they need is enough training and education to get them up and running. This is just as true for the database team as anyone else.

For example, individuals on the database team might focus on specific areas, such as Oracle database development or SQL Server administration. To prepare them for a transition to database DevOps, they should be trained in DevOps methodologies, particularly as they apply to database deployments. In this way, they’ll be much better prepared for the transition and able to understand why specific steps need to be taken. They’ll also be able to more effectively communicate with other DevOps team members about CI/CD operations.

5. Document your databases and data.

When preparing your databases for the transition to DevOps, you should ensure that they’re fully documented so that everyone on the DevOps team can quickly understand how a database’s objects are used and how they impact the application as a whole. Anyone looking at the documentation should be able to understand the types of data being stored, how the data is related, and any special considerations that might apply to the data.

One approach to documentation is to use the features built into the database platform. For example, SQL Server supports extended properties, which can be leveraged to document individual objects. In this way, the data definitions are stored alongside the schema definitions, providing immediate access to the information when needed. Better still, the extended properties also get checked into source control when the schema files are checked in (assuming that source control is being used, as it certainly should be).
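
For teams on SQL Server, a minimal sketch of this approach might look like the following; it uses pyodbc to attach an MS_Description extended property to a column so the documentation travels with the schema. The connection string, schema, table, and column names are placeholders to adapt.

```python
# Minimal sketch: documenting a column with a SQL Server extended property.
# Assumes pyodbc is installed; the connection string and object names below
# are hypothetical and should be replaced with values from your environment.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=LegacyDb;Trusted_Connection=yes;"  # hypothetical
)

ADD_DESCRIPTION = """
EXEC sys.sp_addextendedproperty
     @name = N'MS_Description', @value = ?,
     @level0type = N'SCHEMA', @level0name = ?,
     @level1type = N'TABLE',  @level1name = ?,
     @level2type = N'COLUMN', @level2name = ?;
"""

def document_column(schema, table, column, description):
    """Attach an MS_Description extended property to a column."""
    with pyodbc.connect(CONN_STR) as conn:
        cursor = conn.cursor()
        cursor.execute(ADD_DESCRIPTION, description, schema, table, column)
        conn.commit()

if __name__ == "__main__":
    document_column("dbo", "Customer", "Email",
                    "Primary contact email; required for billing notifications.")
```

Because the property becomes part of the database definition, it is scripted out with the schema and checked into source control alongside it.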

6. Get databases into version control.

Many database teams are already putting their database script files into source control for maintaining file versions, often along with lookup or seed data. The databases might not participate in DevOps processes, but the source files are still protected. If a database team is not using source control, they’ll need to start. Version control is an essential part of DevOps methodologies. It offers one source of truth for all code files, resulting in fewer errors or code conflicts. It also provides a record of all changes, and it makes it possible to roll back changes to a previous version.

Database teams should store all their database code in source control, in a product like SQL Source Control, without exception. This includes the scripts used to build the database as well as change scripts for modifying the schema. It also includes scripts used for creating stored procedures and user-defined functions, as well as any data modification scripts. In addition, it’s a good idea to store static data in source control, such as that used for lookup data, as well as configuration files when applicable. Source control check-ins should also incorporate code reviews to reduce the possibility of errors.

7. Prepare for data migration.

When you update an application, you don’t have to worry about persisting data from one version to the next. It’s not so easy with databases. You can’t simply update or replace schema without considering the impact on data. Even a minor update can result in lost or truncated data. With any database update, you must take into account how your changes will impact the existing data, what steps you must take to preserve that data, and how to apply any new or modified data to the updated database.

When preparing to incorporate your databases into DevOps, you need to have in place a system for ensuring that any new database versions get the data they need. If you’re doing an in-place schema update, you might also need scripts for modifying and preserving data. If you’re re-creating the database from scratch, you need the scripts necessary to populate the tables. In either case, those scripts should be checked into source control along with the schema changes, so they’re included in the CI/CD build process.
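
As a rough illustration of the idea (not a replacement for dedicated migration tooling such as Flyway, DbUp, or Redgate's products), the sketch below applies the checked-in change scripts in filename order against a target database and stops the pipeline on the first failure. The folder layout, server, and database names are assumptions.

```python
# Minimal sketch of a migration runner for scripts kept in source control.
# Assumes a migrations/ folder of files named like 0001_create_customer.sql
# and that sqlcmd is on PATH; real projects typically also record which
# scripts have already been applied.
import pathlib
import subprocess
import sys

MIGRATIONS_DIR = pathlib.Path("migrations")   # hypothetical repo layout
SERVER, DATABASE = "myserver", "LegacyDb"     # hypothetical targets

def apply_migrations():
    scripts = sorted(MIGRATIONS_DIR.glob("*.sql"))  # filename prefix gives the order
    for script in scripts:
        print(f"Applying {script.name} ...")
        result = subprocess.run(
            ["sqlcmd", "-S", SERVER, "-d", DATABASE, "-b", "-i", str(script)],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            print(result.stdout, result.stderr, sep="\n")
            sys.exit(f"Migration {script.name} failed; stopping the pipeline.")
    print("All migrations applied.")

if __name__ == "__main__":
    apply_migrations()
```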

8. Shift left in your thinking.

DevOps methodologies include the concept of shifting left, which refers to the idea of performing certain tasks earlier in the application lifecycle. For example, instead of waiting to build the application and test its code until late in the development cycle, building and testing become an ongoing process that takes place as soon as updated script files are checked into source control. Also important to this process is that code check-ins occur frequently and in smaller chunks. The shift-left approach makes it easier to address issues sooner in the development process when they’re far more manageable.

The shift-left strategy should also be employed when incorporating databases into the DevOps pipeline, with developers checking in their script files on a frequent and ongoing basis. In addition, they should create database-specific tests, such as ones that verify object structure or query results. That way, when a developer checks in code changes, the CI server builds the database and runs the tests so that each change checked into source control is immediately verified. This doesn’t preclude other types of testing later in the cycle, but it helps catch certain issues earlier in the process when they’re much easier to resolve.
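
A minimal sketch of such database-specific tests, assuming pytest and pyodbc and a hypothetical CI build database, might look like this:

```python
# Minimal sketch of database tests a CI server could run on every check-in
# (shift-left). The connection string, table, and column names are
# hypothetical; the pipeline is assumed to build LegacyDb_CI from the
# checked-in scripts before the tests run.
import pyodbc
import pytest

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=ci-sql;DATABASE=LegacyDb_CI;Trusted_Connection=yes;"  # hypothetical
)

@pytest.fixture(scope="module")
def db():
    conn = pyodbc.connect(CONN_STR)
    yield conn
    conn.close()

def test_customer_table_structure(db):
    """Verify the expected columns exist on dbo.Customer."""
    rows = db.cursor().execute(
        "SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS "
        "WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'Customer'"
    ).fetchall()
    columns = {row.COLUMN_NAME for row in rows}
    assert {"CustomerId", "Email", "CreatedOn"} <= columns  # hypothetical columns

def test_lookup_data_seeded(db):
    """Verify seed/lookup data was loaded by the deployment scripts."""
    count = db.cursor().execute("SELECT COUNT(*) FROM dbo.CustomerStatus").fetchval()
    assert count > 0
```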

9. Automate database operations.

Automation is a key component of a successful DevOps operation. This applies to the core application as well as the database. Automation helps avoid repetitive tasks, reduces the potential for errors, and ensures consistent results with each deployment. To this end, DevOps teams must ensure that the build, test, and deployment operations incorporate the database script files along with the application files. This might mean acquiring new CI/CD tools that can better accommodate databases or reconfiguring existing ones to handle the additional requirements.

The key, of course, is to ensure that the database developers are checking their files into source control and following a consistent, agreed-upon process for managing script files. For example, they should decide whether to take a state-based approach or migration-based approach to deploying database updates. Although it’s possible to employ both strategies, choosing one over the other makes it easier to collaborate on a single code base while avoiding the complexities of balancing both at the same time.

10. Perform ongoing monitoring.

Like automation, continuous monitoring is an essential component of a successful DevOps operation. This includes monitoring the systems themselves, as well as providing continual feedback at every stage of the DevOps process. Comprehensive monitoring ensures that everything is functioning properly, while helping to identify issues early in the development cycle. Not only does this make for better applications, but it also helps to improve the CI/CD process itself.

Most DBAs are already familiar with the importance of monitoring their production database systems for performance, security, and compliance issues. In all likelihood, they already have the tools in place for identifying issues and determining which ones need immediate attention. The continuous feedback loop represents a different type of monitoring. It provides database developers with visibility across the entire pipeline and alerts them to any problems with their script files during the integration, testing, and deployment phases. For this reason, you must ensure that your pipeline’s feedback loop takes into account the database script files at every phase of the operation.

Looking to the future

All databases have a limited lifespan, and it’s just a matter of time before your legacy databases must be overhauled or replaced with another system. For example, you might find that a document database such as MongoDB will better serve an application than a relational database. Many organizations are also looking for ways to better accommodate the persistent data requirements that go with containers, microservices, and cloud-based applications.

No matter what direction your organization is heading, chances are it’s moving toward a goal of faster release cycles and greater agility. Database teams are under increasing pressure to adopt this strategy and take a more flexible approach to database maintenance. Instead of thinking of databases as unmovable monolithic structures, they must learn to treat them as manageable code that can be tested, automated, and deployed just like application code. DevOps can make this transition a lot easier, while also preparing database teams for what might be coming in the future.

SQL – Simple Talk

Monitoring the Power Platform: Azure DevOps – Orchestrating Deployments and Automating Release Notes

August 31, 2020   Microsoft Dynamics CRM

Summary

 

DevOps has become more and more ingrained in our Power Platform project lifecycle: work item tracking and feedback tools for teamwork, continuous integration and delivery for code changes and solution deployments, and automated testing for assurance, compliance, and governance considerations. Microsoft’s tool, Azure DevOps, provides native capabilities to plan, work, collaborate, and deliver. Each step along the way in our Power Platform DevOps journey can be tracked and monitored, which is the primary objective of this article.

In this article, we will focus on integrating Azure DevOps with Microsoft Teams to help coordinate and collaborate during a deployment. We will explore the various bots and how to set them up. From there we will walk through a sample scenario involving multiple teams working together. Finally, we will look at automating release notes using web hooks and an Azure Function.

Sources

 

Sources of Azure DevOps events that impact our delivery can come from virtually any area of the platform, including work items, pipelines, source control, testing, and artifact delivery. For each of these events, such as completed work items, we can set up visualizations such as charts based on defined queries. Service hooks and notification subscriptions can be configured to allow real-time reporting of events to external parties and systems, allowing us to stay in a state of continuous communication and collaboration.

Microsoft Teams, Continuous Collaboration and Integration

 

The Azure DevOps bots for Microsoft Teams have quickly grown into one of my favorite features. For instance, Azure DevOps dashboards and kanban boards can be added to channels for visualizations of progress, as shown below.

[Screenshot: Azure DevOps dashboard embedded in a Microsoft Teams channel]

Multiple Azure DevOps bots can be configured to deliver messages to and from Microsoft Teams to allow for continuous collaboration across multiple teams and channels. These bots can work with Azure Pipelines, work items and code pull requests.

[Screenshots: the Work Items, Code, and Pipelines bots in Microsoft Teams]

For monitoring and orchestrating deployments across our various teams, the Azure Pipelines bot is essential. Let’s begin by setting up subscriptions to monitor a release pipeline.

NOTE: The rest of this document will be using a release pipeline as an example, but this will also work with multi-stage build pipelines that utilize environments.

Configuring the Azure Pipelines Bot in Microsoft Teams

 

Use the “subscriptions” keyword with the Azure Pipelines bot to review and modify existing subscriptions and add new ones.

[Screenshot: the Azure Pipelines bot subscriptions dialog in Microsoft Teams]

In the above example, we are subscribing to any changes in stages or approvals for a specific release pipeline. It’s recommended to filter to a specific pipeline to reduce clutter in our Teams messaging. The Azure Pipelines bot, using actions described in the article “Azure DevOps – Notifications and Service Hooks”, can be further filtered by build statuses. This is helpful for isolating the messages delivered to a specific Teams channel.

[Screenshot: filtering Azure Pipelines bot subscriptions by build status]

Once configured, as soon as our pipeline begins to run, Microsoft Teams will begin to receive messages. Below is an example showing the deployment of a specific release, including stages and approval requests. What I find nice about this is that Microsoft Teams works on both my mobile devices and even Linux-based operating systems, allowing any team on any workload to utilize this approach.

[Screenshot: release stage and approval notifications in a Teams channel]

I also want to point out that Azure DevOps can natively integrate with other third-party tools such as Slack (similar to the Teams bots), ServiceNow, and Jenkins.

Release Pipelines

 

Quality Deployments

 

Deployments within a release pipeline allow for numerous ways to integrate monitoring into Azure DevOps processes. Each deployment includes pre- and post-conditions, which can be leveraged to send events and metrics. For instance, the Azure Function gate can be used to invoke a microservice that writes to Azure Application Insights, creates ServiceNow tickets, or even produces Kafka events. The possibilities are endless; imagine sending messages back to the Power Platform for each stage of a deployment!

Approvals

 

Pre- and post-deployment approvals can be added to each job in the release pipeline. Adding these can assist during a complex deployment requiring coordination between multiple geographically dispersed teams. Shown below is a hypothetical setup of multiple teams, each with specific deliverables as part of a release.

[Diagram: release pipeline stages assigned to multiple feature teams]

In this scenario, a core solution needs to be deployed and installed before work on dependent features can begin. When any of the steps in the delivery process begins, the originating team needs to be notified in case any issues come up.

Using approvals allows the lead of the specific feature team to align the resources and communicate to the broader team that the process can move forward. The full example can be found below.

[Screenshot: release pipeline with pre- and post-deployment approvals]

Here is an example of an approval within Microsoft Teams, notifying the lead of the core solution team that the import process is ready. The approval request shows the build artifacts (e.g. solutions, code files, etc), the branch and pipeline information.

[Screenshot: approval request card in Microsoft Teams showing build artifacts, branch, and pipeline details]

Deployment Gates

 

At the heart of a gated deployment approach is the ability to search for inconsistencies or negative signals to minimize unwanted impact further in the process. These gates, which can be set to run before or after a deployment job, allow us to query for potential issues and alerts. They also could be used to notify or perform an operation on an external system.

[Screenshot: deployment gate options for a release stage]

Queries and Alerts

 

Deployment gates provide the ability to run queries on work items within your Azure DevOps project. For instance, this allows release coordinators and deployment managers to check for bugs reported from automated testing using RSAT for Dynamics 365 F&O or EasyRepro for Dynamics 365 CE. These queries are created within the Work Items area of Azure DevOps. From there, they are referenced within the pipeline, and upper and lower thresholds can be set based on the data returned. If these thresholds are crossed, the gate condition is not successful and the process will halt until corrections are made.
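
Under the hood, a gate of this kind amounts to running a work item query and comparing the result count to a threshold. The sketch below shows the equivalent check against the Azure DevOps REST API (WIQL endpoint); the organization, project, and threshold are placeholder values, and the personal access token is assumed to be in an environment variable.

```python
# Minimal sketch of the kind of check a work-item deployment gate performs:
# run a WIQL query and fail if the number of active bugs exceeds a threshold.
# Assumes the requests library and a PAT in the AZDO_PAT environment variable;
# organization and project names are hypothetical.
import os
import sys
import requests

ORG, PROJECT = "contoso", "PowerPlatform"          # hypothetical
THRESHOLD = 0                                      # upper bound allowed by the gate
WIQL = {
    "query": "SELECT [System.Id] FROM WorkItems "
             "WHERE [System.WorkItemType] = 'Bug' AND [System.State] = 'Active'"
}

url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/wiql?api-version=6.0"
resp = requests.post(url, json=WIQL, auth=("", os.environ["AZDO_PAT"]))
resp.raise_for_status()

open_bugs = len(resp.json().get("workItems", []))
print(f"Active bugs: {open_bugs} (threshold: {THRESHOLD})")
if open_bugs > THRESHOLD:
    sys.exit("Gate condition failed: active bugs exceed the threshold.")
```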

External Integrations

 

As mentioned above, Azure Functions are natively integrated within deployment gates for release pipelines. They can be used as both pre-conditions and post-conditions to report to or integrate with external systems.

Deployment gates can also invoke REST API endpoints. This could be used within the Power Platform to query the CDS API or run Power Automate flows. One example could be querying the Common Data Service for running asynchronous jobs, creating activities within a Dynamics 365 environment, or performing admin actions such as enabling Admin mode. Another could be using the robust approval process built into Power Automate for pre- and post-deployment approvals outside of the Azure DevOps licensed user base.
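
As a hedged illustration of the first example, the snippet below sketches what the REST endpoint a gate invokes might do: count Dataverse asynchronous jobs that are not yet completed and return a payload the gate's success criteria can evaluate. The environment URL, token handling, and the statecode filter value are assumptions to verify against your environment.

```python
# Minimal sketch of what a deployment gate's "Invoke REST API" target might do:
# ask the Dataverse (CDS) Web API how many system jobs are still running and
# return a result the gate can evaluate. The environment URL, the bearer token
# (obtained via OAuth elsewhere), and the assumption that statecode 3 means
# Completed are all placeholders to confirm for your tenant.
import os
import requests

ENV_URL = "https://contoso.crm.dynamics.com"          # hypothetical environment
TOKEN = os.environ["DATAVERSE_ACCESS_TOKEN"]          # assumed to be pre-acquired

def pending_async_jobs() -> int:
    """Count asyncoperations that are assumed not yet completed."""
    resp = requests.get(
        f"{ENV_URL}/api/data/v9.1/asyncoperations",
        params={"$filter": "statecode ne 3", "$count": "true", "$top": "1"},
        headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
    )
    resp.raise_for_status()
    return resp.json().get("@odata.count", 0)

def gate_result() -> dict:
    """JSON payload for the gate; succeed only when no jobs are still running."""
    pending = pending_async_jobs()
    return {"pendingJobs": pending, "safeToDeploy": pending == 0}

if __name__ == "__main__":
    print(gate_result())
```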

Using Build Pipelines or Release Pipelines

 

In the previous section, I described how to introduce quality gates to a release, securing each stage of the pipeline. Release pipelines are useful to help control and coordinate deployments. That said, environments and build pipelines allow for the use of YAML templates, which are flexible across both Azure DevOps and GitHub and allow teams to treat pipelines like other source code.

Environments

 

Environments in Azure DevOps allow for targeted deployment of artifacts to a collection of resources. In the case of the Power Platform, this can be thought of as a release to a Power Platform environment. The use of pipeline environments is optional, unless you begin working with release pipelines, which do require environments. Two of the main advantages of environments are deployment history and security and permissions.

Environment Security Checks

 

Environment security checks, as mentioned above, can provide quality gates similar to the current capabilities of Release Pipelines. Below is an example of the current options compared to Release Pre and Post Deployment Quality Gates.

[Screenshot: environment security check options in Azure DevOps]

Here is an example of linking to a template in GitHub.

[Screenshot: an environment check referencing a template in GitHub]

Compare this to the Release Pipeline Pre or Post Deployment Quality Gates.

[Screenshot: release pipeline pre- and post-deployment gate options]

Scenario: Orchestrating a Release

 

[Diagram: orchestrating a multi-stage release with Microsoft Teams]

In the above example, we have a multi-stage release pipeline that encompasses multiple teams from development to support to testing. The pipeline relies on multiple artifacts and code branches for importing and testing.

In this example, we have a core solution containing Dynamics 365 entity changes that are needed by integrations. The core team will need to lead the deployment and testing, then notify the subsequent teams that everything has passed so they can move on.

Below is an example of coordination between the deployment team and the Core team lead.

[Screenshot: Teams messages coordinating the deployment team and the Core team lead]

Below is an image showing the entire release deployment with stages completed.

[Screenshot: the completed release deployment with all stages finished]

Automating Release Notes

 

Azure Application Insights Release Annotations

 

The Azure Application Insights Release Annotations task is a marketplace extension from Microsoft that allows a release pipeline to signal an event, such as the start of the pipeline, its end, or any other event we are interested in. From there we can use the native functionality of Azure Application Insights to stream metrics and logs.

Using an Azure Function with Web Hooks

 

Service hooks are a great way of staying informed of events happening within Azure DevOps, freeing you up to focus on other things. Examples include pushing notifications to your teams’ mobile devices, notifying team members on Microsoft Teams, or even invoking Microsoft Power Automate flows.

[Screenshot: Azure DevOps service hook configuration]

The sample code for generating Azure DevOps release notes using an Azure Function can be found here.
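
The linked sample aside, a minimal sketch of the idea is shown below: an HTTP-triggered Azure Function receives the service hook payload and appends a line of release notes. The payload fields read here and the output location are assumptions; a real implementation would validate the event type and write to a wiki page or blob storage rather than a local file.

```python
# Minimal sketch (not the linked sample) of an Azure Function that a service
# hook could call on release events to append a line of release notes. The
# payload field paths and the output file are assumptions to adapt.
import datetime
import json
import logging
import pathlib

import azure.functions as func

NOTES_FILE = pathlib.Path("/tmp/release-notes.md")   # hypothetical output location

def main(req: func.HttpRequest) -> func.HttpResponse:
    event = req.get_json()
    event_type = event.get("eventType", "unknown")
    summary = event.get("message", {}).get("text", "(no summary provided)")

    line = f"- {datetime.date.today()} {event_type}: {summary}\n"
    with NOTES_FILE.open("a", encoding="utf-8") as notes:
        notes.write(line)

    logging.info("Appended release note: %s", line.strip())
    return func.HttpResponse(json.dumps({"appended": line}), mimetype="application/json")
```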

Next Steps

 

In this article, we have worked with Azure DevOps and Microsoft Teams to show a scenario for collaborating on a deployment. Using the SDK or REST API, Azure DevOps can be explored in detail, allowing us to reimagine how we consume and work with the service. This will help with automating release notes and inviting feedback from stakeholders.

Previously we looked at setting up notifications and web hooks to popular services. We then reviewed the Azure DevOps REST API to better understand build pipelines and environments.

If you are interested in learning more about specialized guidance and training for monitoring or other areas of the Power Platform, which includes a monitoring workshop, please contact your Technical Account Manager or Microsoft representative for further details.

Index

 

Monitoring the Power Platform: Introduction and Index

Dynamics 365 Customer Engagement in the Field

A Practical Guide To DevOps For ERP

June 7, 2020   BI News and Info

Part of the “DevOps for ERP” series

Whether or not you believe in the value that DevOps can offer to a business – and there’s already plenty of evidence to show that it can deliver major benefits – there’s no doubt that more and more companies are starting to wonder why they haven’t extended this approach to their ERP systems.

Not so long ago, I regularly had to explain what agile and DevOps meant, but nowadays, people come to us asking how we can help them adopt these approaches.

So why the change? Transformation is the key. It’s a word that’s a bit overused by my colleagues in the marketing world, in my opinion. But with the move to cloud, the constant emergence of new technologies, and growing pressure on businesses to innovate and increase competitiveness, real changes are happening that IT teams simply have to respond to.

Perhaps unlike in years gone by, ERP teams are not immune to this trend. As “systems of engagement” like websites and mobile apps change faster than ever, the “systems of record” that often power them need to keep pace. Otherwise the whole business slows down.

Unfortunately, the ERP development processes most people have been familiar with throughout their careers – the “waterfall” method most often still in use today – tend to suffer from a slow pace of change. This can be explained by the concern that changing things in ERP systems has traditionally come with a high chance of failure (an unacceptable outcome for business-critical systems).

DevOps, on the other hand, supports application delivery in shorter, more frequent cycles where quality is embedded from the start of the process, and risk is substantially reduced.

Great, I hear you say; let’s do it! However, even the most enthusiastic organizations cannot implement DevOps in ERP systems in exactly the same way as they’ve done for other applications. The fundamental requirements for DevOps are the same – I covered some of them here – but the practicalities are different, not least because standard DevOps tools aren’t capable of doing the job. What’s more, the DevOps experts don’t necessarily understand what’s needed in ERP, while the ERP experts may never have heard of DevOps!

What is the practical reality if companies do adopt DevOps for ERP?

Higher-quality development

Delivering software at high speed requires a robust development process that combines clear business requirements and constant feedback. DevOps mandates that ownership of quality “shifts left” and is embedded from the very start of the process. This way, most (and ideally all) problems can be identified long before they get to live production systems (where the disruption caused and associated cost to fix are much greater).

In practice, this means we need to ensure that nothing leaves development without being fully quality-checked. Working practices like daily stand-up sessions, mandatory peer reviews of code, and a set of universal coding standards might not seem revolutionary for some IT teams, but they are new ideas for many ERP professionals. They’re only part of the solution, though, going along with technical elements like automated unit testing and templated lock-down of high-risk objects.

One other practical outcome of DevOps from the very first stage of development is that ERP and business teams must be more closely aligned to ensure that customer requirements are clearly understood. Integration between the development team and other IT functions like QA and operations also establishes an early validation step.

Low-risk, high cadence delivery

Continuous integration is an aspect of DevOps that means that – unlike in many ERP landscapes – changes can be successfully deployed to QA or any other downstream system at any time without risk. The big change here is the ability to deploy based on business priorities, rather than just having to wait for the next release window.

Automation gives you the means to achieve this new high-frequency delivery cadence in ERP systems by providing a way to better manage risk (spreadsheets definitely do not form a core part of a DevOps-based software delivery process!). It enables you to check every change for technical issues like completeness, sequencing, dependencies, risk, impact, and more, ensuring that nothing is promoted prematurely.
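
As a generic illustration of this kind of automated check (not any specific vendor's tooling), the sketch below validates that a candidate set of changes is promoted in sequence and that each change's dependencies have already been deployed:

```python
# Generic illustration of an automated pre-promotion check: verify that each
# candidate change's dependencies are already deployed and that changes are
# promoted in sequence. Change IDs and descriptions are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Change:
    change_id: int
    description: str
    depends_on: set[int] = field(default_factory=set)

def validate_promotion(candidates: list[Change], already_deployed: set[int]) -> list[str]:
    """Return a list of problems; an empty list means the set is safe to promote."""
    problems = []
    promoted = set(already_deployed)
    for change in sorted(candidates, key=lambda c: c.change_id):  # enforce sequencing
        missing = change.depends_on - promoted
        if missing:
            problems.append(
                f"Change {change.change_id} ({change.description}) "
                f"depends on undeployed changes: {sorted(missing)}"
            )
        promoted.add(change.change_id)
    return problems

if __name__ == "__main__":
    issues = validate_promotion(
        [Change(102, "new pricing view", {101}), Change(103, "report tweak", {99})],
        already_deployed={99, 100, 101},
    )
    print(issues or "Safe to promote.")
```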

This more rigorous, agile approach means QA teams, in particular, can focus their attention on real issues rather than technical “noise,” which accelerates the delivery of functionality that business users or customers are waiting for. Changes can be selectively and automatically deployed with confidence, rather than waiting for the next full release.

Minimal production impact

“Stability is king” has long been an unofficial mantra in ERP environments, given their importance to day-to-day business operations. With DevOps, the required system stability is maintained even though live production systems can be updated far more often. Rigorous controls – built on both technical solutions and new collaborative workflows – ensure that deployments are safely delivered to end users as soon as possible.

But there is always a risk, however small, that a change to live ERP systems can cause problems that stop the business. That’s why Mean Time To Recover (as opposed to the more traditional Mean Time To Failure) is a key DevOps metric. The most effective ERP DevOps processes feature a back-out plan that allows changes to be reversed as quickly as possible so, even if disaster strikes, the impact of change-related downtime is minimal, and business continuity can be maintained.

The culture question

As I’ve explained, when implemented correctly, DevOps fundamentally changes traditional ERP development processes. However, the manner in which DevOps impacts the roles and approach of staff can be just as important. In DevOps, effective collaboration is key. Traditional silos based on job function are replaced by multi-skilled, cross-functional teams that work together to deliver agreed-upon business outcomes. This may require a significant shift in how teams are organized.

It’s normal for some people to find this new way of working challenging, but creating a successful DevOps culture empowers team members to take responsibility at every stage of the development lifecycle. It enables them to collaborate with their colleagues and focus on a common goal of rapidly delivering the high-quality features and functionality the business needs to remain competitive.

DevOps benefits and outcomes

Change happens fast, and companies need to respond quickly. IT systems must, therefore, have the flexibility to rapidly change, expand, extend, and adapt.

But accelerating delivery cannot be done at the expense of business continuity. Successfully adopting DevOps for ERP combines speed, quality improvements, and risk reduction. That provides the flexibility to change ERP environments at the speed the business needs with confidence that it can be achieved without compromising stability.

For more on this topic, please read “How to Build a Business Case for DevOps” and “Self-Assessment: Are You Already Doing ERP DevOps?”

For a practical guide on how to introduce DevOps to your ERP software development and delivery processes, download our e-book.

A version of this article originally appeared on the Basis Technologies blog. This adapted version is republished by permission. Basis Technologies is an SAP silver partner.

Digitalist Magazine

CRM Software Revolution with DevOps

June 2, 2020   Microsoft Dynamics CRM

In today’s uncertain market, businesses focus on agility to become more proactive and responsive to market changes, new and innovative competitors, and demanding customer preferences. Due to these very challenging business conditions, software development is turning toward agile and Scrum methods for its survival and growth. However, IT operational processes are still unable to deliver similar results.

DevOps is a practice that emphasizes improving collaboration between the operations and development departments of an organization. DevOps uses several technologies to safeguard a dynamic set-up from a life cycle viewpoint. It is a response to the need to sustain business agility when it comes to IT operations. With a focused ideology, organizations can enable continuous software delivery and join software management and creation together by removing silos. DevOps covers the end-to-end software development lifecycle (SDLC) and focuses on the delivery of software-driven innovation.

It aims to speed up time to market for new applications and helps enhance existing applications. It also serves three essential functions for businesses:

  1. It promotes communication and collaboration between developers, IT operations experts, and QA personnel.
  2. It automates software delivery and infrastructure changes.
  3. It helps create an agile platform that is faster, smoother, and more streamlined.

If we look back at the history of DevOps, we see that it surfaced as a response to Agile software development approaches and, somewhat later, to cloud computing. If we investigate the progress of cloud CRM, we see that it has facilitated change and innovation over time, reducing rigid methodological constructs and fuelling innovation. An excellent example of this is Scrum, which increases the frequency of incremental software releases.

In recent years, CRM software deliveries have increased. This has resulted in a direct impact on the release management process. The operational lifecycle is suffering from a bottleneck as it is not compatible with the increased load of deliveries. 

DevOps and the Agile Software Development Process 

For coders, DevOps is pretty much what QA and IT operations are to the company itself. Perhaps it is safe to conclude that DevOps and Agile have a synergistic relationship. To explore this relationship, we will investigate the many similarities that Agile and DevOps share:

  • Both agile and DevOps fuel teamwork amongst versatile and independent teams. The roles may differ, but the key ideology is the same. 
  • Both agile and DevOps promote iterative and adaptive application approaches and focus on transparency, review, and adaptation. This allows these methodologies to be quality-focused, risk-cautious, and outcomes-oriented.
  • Agile endorses the continual delivery of progressive software releases. DevOps follows a similar approach when it comes to software operations. 

With time, agile methods have created downstream challenges for companies. DevOps offers a solution to these very challenges by streamlining the software creation procedure and allowing for a much faster time to market than ever before. DevOps enables customers to enjoy new software capabilities without process delays. Happy customers are the primary goal of every organization, so that is a big plus!

However, if an organization fails to implement a holistic software ops process to manage agile’s augmented pace, it will suffer greatly. Problems like a stubborn release method, increased errors, and delayed software distribution will occur if an organization is unable to implement software ops properly.

Scrum and the DevOps Framework 

To implement the principles of DevOps in an organization, one needs to follow a systematic approach. Unlike Agile, which has many mature frameworks such as Scrum, Kanban, XP, etc., DevOps does not have a similar design for implementation. Experts have even gone as far as to claim that DevOps can be looked at as a cohesive extension of the agile framework.

The most popular agile methodology of all time has been the Scrum. It has four basic components: roles, events, artifacts, and rules. There are a few simple procedures that allow these components to be naturally extended into the DevOps ideology. We will go over them one by one. 

  • Roles:  

The Scrum has three roles that are divided amongst the scrum team: the Product Owner, the ScrumMaster, and the Development Team. DevOps roles can be handled similarly. The team will include QA experts, IT experts, and developers. In most cases, there will be an overlap in the responsibilities assigned to the roles. An essential goal of DevOps is to eliminate silos and create integrated software management and development personnel. Like the Scrum method, the DevOps team should also be independent and goal-oriented. This means that they will have to work with people from different departments to achieve a common end result.

  • Events/Ceremonies:  

Scrum events or ceremonies are based on sprints. A sprint entails four different scrum ceremonies: sprint planning, daily Scrum, sprint review, and sprint retrospective. The same event-based approach can be incorporated into DevOps to reap Scrum benefits such as higher productivity due to improved collaboration and communication, better-quality products, improved team dynamics, and reduced time to market. 

  • Artifacts:  

There are three primary artifacts in Scrum practice: the Product Backlog, the Sprint Backlog, and the Product Increment. These can be implemented in DevOps, for example, by studying the sprint backlog to plan upcoming releases or by evaluating increments to identify variations to the current software ops.

  • Tools:  

Unlike Agile tools, which are generally limited to apps like Jira and Rally, DevOps tools contain a more evolved toolbox known as the toolchain. The toolchain supports numerous operational requirements, making it far more sophisticated than the Agile tools.

There are many ways of incorporating toolchains into grouped categories of release, configuration, and operations management. However, for companies that want to implement a toolchain hierarchy in line with ERP systems such as NetSuite or CRM software applications such as Dynamics 365, the following tools may be more beneficial: 

  • Code Tools aid in software development and integration and often assist in the transition towards DevOps. They consist of both integrated development environments and tools specific to applications such as xRM for Dynamics CRM. 
  • Build Tools are used to migrate solution builds between staged environments, for example, migrating Dev to QA. Build tools deal with source code, code merging, version control, and automated builds. CRM software publishers often provide their own build tools; a good example is Lifecycle Services by Microsoft, which helps with build management for Dynamics 365.
  • Test Tools are used to determine quality, test security, evaluate performance and scalability of the software.  
  • Release Tools allow continuous integration and deployment of the software. They are utilized for release approvals and automation. They also come in handy in change management scenarios.  

On a conclusive note, I would like to add that DevOps integration is highly beneficial and quite necessary for CRM software innovation. A solution provider that can amalgamate DevOps into solutions that it offers, be it CRM or any other practice, will allow enterprises to have a competitive edge through efficient and streamlined internal operations.

CRM Software Blog | Dynamics 365

A Peek into DevOps with Dynamics 365

March 31, 2020   Microsoft Dynamics CRM

This blog aims to discuss three essential tools of Dynamics 365 Customer Engagement / CRM and the functions they serve in the DevOps process. However, before we introduce these three magic tools, we are first going to understand what DevOps and Dynamics CRM are, independently and together.

What is DevOps?

The term DevOps was coined more than a decade ago by Patrick Debois. He combined two business terms, “development” and “operations,” to come up with this widely acknowledged business philosophy. DevOps represents an ideology that believes in bringing about change in the IT industry by adopting agile and lean processes in the “context of a system-oriented approach” – as Gartner puts it.

In layman’s terms, DevOps is an ideology that focuses on improving collaboration between the operations and development teams. DevOps uses various technologies to ensure a dynamic infrastructure from a life cycle perspective. The DevOps umbrella spreads over processes, ideologies, and culture, along with a mindset that aims to shorten the overall software development life cycle (SDLC). By incorporating multiple automation features, DevOps tries to speed up operation and development processes like feedback, fixes, and updates, ensuring that a culture of efficiency is prevailing.

How Does DevOps work?

DevOps is described as a culture, and like all cultures, it has adopted many practices within itself. However, like all cultures, there are a few “norms” that remain constant. The following capabilities are native to every DevOps culture:

  • Collaboration
  • Automation
  • Continuous Integration
  • Continuous Delivery
  • Continuous Testing
  • Continuous Monitoring
  • Rapid Remediation

The basic idea that DevOps has promoted is to employ agile functionality in the people first, then in the process, and then in the product. For the purpose of this blog, we are mostly focusing on the process element of DevOps and seeing how Dynamics 365 CRM tools are helping empower the process of DevOps further.

What is Dynamics CRM?

CRM stands for customer relationship management. Dynamics is Microsoft’s Business Applications product line of ERP and CRM software. The aim of this software is to improve interactions with customers and clients by ensuring that users can track sales leads, investigate marketing activities and agendas, and focus on actionable data using one robust platform.

Dynamics 365 is a customizable solution that is designed to meet business needs. Companies can opt to select a stand-alone application that fits their needs or opt for multiple CRM tools that work as a powerful unified solution.

Why go with a CRM solution?

Employing CRM software in your company will do wonders for your profitability by streamlining administrative processes in each department. CRM software ensures that you have a positive relationship with your clientele. An integrated CRM solution helps you quickly focus on growth opportunities, increase revenue, and retain customers.

  • Enhanced Marketing Strategies
  • Workflow Productivity
  • Efficient Data Analytics
  • Unified View of all Operations
  • Positive Customer Relationships
  • Ease of access
  • Streamlined Invoicing
  • Scalable CRM Solutions

Dynamics CRM and DevOps

For a software development process to be considered successful or agile, it is essential to ensure that it has a short development lifecycle and a fast time to market. DevOps adoption allows companies to achieve an effective and agile software process.

Dynamics CRM, despite its many benefits, comes with specific challenges, particularly when it comes to continuous integration and deployment (the CI/CD process). Many companies are utilizing DevOps for CRM to enhance their agile cycle velocity. This helps them ensure maximum value is delivered through new CRM features.

The Three Microsoft SDK Tools:

The tools I will be discussing today come prepackaged with the Microsoft Dynamics Software Development Kit (Microsoft SDK). These tools are:

  1. SolutionPackager.exe
  • This tool deconstructs a compressed Dynamics 365 solution file (reversibly) into multiple XML and other files so that a source control system can manage them (a scripted example of this step follows the list below).
  • When we export a CRM solution, it is compressed into a handful of files, each crammed with a lot of content. For tracking purposes, we need to keep those files well-structured under source control.
  2. Configuration Migration (DataMigrationUtility.exe)
  • This tool is used to move configuration data and user data between Dynamics 365 instances and organizations.
  • Please note: configuration data is not the same as end-user data.
  • The Configuration Migration tool also relies on each record's GUID. GUIDs are often assumed to be the same across CRM instances; this is not the case, and it is vital to keep that in mind.
  • For example, a workflow that refers to a record does so by that record's GUID. If the same record exists in two CRM instances, its GUIDs will differ, so the reference will not resolve.
  3. PackageDeployer.exe
  • This tool, as the name suggests, deploys packages to Dynamics 365.
  • For those who do not know what a package is, it can include:
    • One or several Dynamics 365 solution files.
    • CSV files or exported configuration files produced by the Configuration Migration tool.
    • Custom code that can run before the package is deployed, during deployment, and even after it.
    • An HTML page that can be displayed at the beginning and at the end of the process.
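
To make the SolutionPackager step concrete, here is a minimal sketch, written in Python purely for illustration, of how a build step might unpack an exported solution into a source-controlled folder and repack it for deployment. The /action, /zipfile, /folder, and /packagetype switches are the ones SolutionPackager accepts for extract and pack operations; the tool location, paths, and solution name are hypothetical.

```python
import subprocess
from pathlib import Path

# Hypothetical locations; adjust to your environment.
SOLUTION_PACKAGER = r"C:\CrmSdk\Tools\CoreTools\SolutionPackager.exe"
SOLUTION_ZIP = r"C:\exports\ContosoSales.zip"            # solution exported from Dynamics 365
SOURCE_FOLDER = r"C:\repos\crm\solutions\ContosoSales"   # folder tracked in source control

def extract_solution() -> None:
    """Unpack the exported solution zip into individual XML files for source control."""
    subprocess.run(
        [
            SOLUTION_PACKAGER,
            "/action:Extract",
            f"/zipfile:{SOLUTION_ZIP}",
            f"/folder:{SOURCE_FOLDER}",
            "/packagetype:Unmanaged",
        ],
        check=True,
    )

def pack_solution(output_zip: str) -> None:
    """Rebuild a deployable solution zip from the files held in source control."""
    Path(output_zip).parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            SOLUTION_PACKAGER,
            "/action:Pack",
            f"/zipfile:{output_zip}",
            f"/folder:{SOURCE_FOLDER}",
            "/packagetype:Unmanaged",
        ],
        check=True,
    )

if __name__ == "__main__":
    extract_solution()
    pack_solution(r"C:\builds\ContosoSales_candidate.zip")
```

In a pipeline, the extract step typically runs right after the solution is exported so the resulting XML files can be committed, and the pack step runs during the build that produces the artifact handed to Package Deployer.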

All these tools have a significant impact on agility and functionality. They increase process efficiency and support a delivery life cycle that saves both time and cost.

There we have it: the three tools that help organizations implement DevOps in their processes through agile practices and automation. It is important to note that this is only one facet of DevOps and Dynamics 365; there is a lot more to these concepts, both independently and together, that we will explore further over time.

In conclusion, incorporating a DevOps culture into company practices, procedures, and norms will help the company maximize its potential and reap the full benefits.

CRM Software Blog | Dynamics 365

Read More

Key Findings from the 2020 Database DevOps Survey

February 6, 2020   BI News and Info

Each year Redgate Software runs a survey to learn more about how organizations practice DevOps, especially as it relates to the database. This year, over 2,000 individuals responded, representing a wide range of industries and company sizes.

The key findings in the report are:

  • Frequent database deployments are increasing: 49% of respondents now report they deploy database changes to production weekly or more frequently.
  • Frequent deployers who use version control report lower production defect rates.
  • Respondents who report that it is easy to get a code review for database changes report lower production defect rates and lower lead time for change deployment.
  • Although 38% of respondents report using Change Approval Boards, we see no evidence that Change Approval Boards lower code defect rates, only that they increase lead time for changes.
  • Respondents who reported that all or nearly all their database deployments take place with the system online also reported lower lead time for changes and lower defect rates.
  • 60% of Enterprise respondents believe the move from traditional database development to a fully automated deployment process can be achieved in a year or less; among non-Enterprise respondents, the figure is 66%.

As I read through the report, one thing stood out to me: organizations that deploy more frequently have lower defect rates. In fact, “37% of those who have adopted DevOps across all projects report that 1% or less of their deployments introduce code defects which require hotfixes, compared to 30% for all other groups.”

When there is a defect in application software, it's usually easy to roll back the change: perhaps services must be reconfigured or files replaced. Deployment issues with databases are much more critical and difficult to resolve. Typically, you can't just restore production because of the data loss involved, and the database will be unavailable during the restore.

Instead of massive changes every few weeks or months, DevOps deploys small changes frequently. Frequent database changes do sound intimidating, but because the changes are small, they have less impact on stability. And the difference in the share of teams keeping defects to 1% or less of deployments, 37% versus 30%, is impressive!

Even if your company is not “all in” yet, there are things you can do. Make sure that you are using source control to keep track of database changes. Begin automating database deployments to dev and other non-production environments. Make an effort to communicate with other teams to break down those silos.
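
As a first step toward that automation, here is a minimal sketch in Python, not Redgate tooling, just an illustration, that applies the versioned change scripts held in source control to a development database in filename order. The connection string, folder layout, ODBC driver name, and naive GO splitting are all assumptions; dedicated migration tools track which scripts have already run and handle batching far more robustly.

```python
import pathlib
import pyodbc

# Assumed connection string and script folder; adjust for your environment.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=devsql01;DATABASE=SalesDev;Trusted_Connection=yes;"
)
MIGRATIONS = pathlib.Path("database/migrations")  # e.g. 0001_create_orders.sql, 0002_add_index.sql

def deploy_to_dev() -> None:
    """Apply every migration script, in filename order, to the dev database."""
    with pyodbc.connect(CONN_STR, autocommit=True) as conn:
        cursor = conn.cursor()
        for script in sorted(MIGRATIONS.glob("*.sql")):
            sql = script.read_text()
            # Naive batch splitting on GO; real migration tools handle this more robustly.
            for batch in sql.split("\nGO\n"):
                if batch.strip():
                    cursor.execute(batch)
            print(f"Applied {script.name}")

if __name__ == "__main__":
    deploy_to_dev()
```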

There’s a lot to learn from the report, and I hope you take a look and consider participating in next year’s survey.

Commentary Competition

Enjoyed the topic? Have a relevant anecdote? Disagree with the author? Leave your two cents on this post in the comments below, and our favourite response will win a $50 Amazon gift card. The competition closes two weeks from the date of publication, and the winner will be announced in the next Simple Talk newsletter.

SQL – Simple Talk

Read More

Learn how to connect Azure DevOps API in a secure way

December 6, 2019   Self-Service BI

Join Microsoft Data Platform MVP Gaston Cruz  for a great webinar!

In this session, Gaston Cruz will cover how to connect to the Azure DevOps API in a secure way. His example will show how to extract metrics for a development team and how to get crucial reporting details about day-to-day work and deployments.

He will also share some of the reports we can build by connecting Power BI Desktop to dataflows (using parameters and functions to populate entities).
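
For a rough idea of what connecting securely can look like, here is a minimal Python sketch that calls the Azure DevOps REST API using a personal access token (PAT) sent over HTTPS basic authentication, with the token read from an environment variable so it never appears in source code. The organization and project names are placeholders, and the Builds endpoint is just one example of the kind of team metrics the session covers.

```python
import os
import requests

# Placeholders; the PAT is read from the environment so it never lives in source control.
ORGANIZATION = "my-org"
PROJECT = "my-project"
PAT = os.environ["AZURE_DEVOPS_PAT"]

def recent_builds(top: int = 20) -> list[dict]:
    """Return the most recent builds for the project via the Azure DevOps REST API."""
    url = f"https://dev.azure.com/{ORGANIZATION}/{PROJECT}/_apis/build/builds"
    response = requests.get(
        url,
        params={"$top": top, "api-version": "6.0"},
        auth=("", PAT),  # basic auth with an empty username and the PAT as the password
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["value"]

if __name__ == "__main__":
    for build in recent_builds():
        print(build["buildNumber"], build["status"], build.get("result"))
```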

Microsoft Power BI Blog | Microsoft Power BI

Read More

Introduction to DevOps: 10 Guidelines for Implementing DevOps

November 19, 2019   BI News and Info

The series so far:

  1. Introduction to DevOps: The Evolving World of Application Delivery
  2. Introduction to DevOps: The Application Delivery Pipeline
  3. Introduction to DevOps: DevOps and the Database
  4. Introduction to DevOps: Security, Privacy, and Compliance
  5. Introduction to DevOps: Database Delivery
  6. Introduction to DevOps: 10 Guidelines for Implementing DevOps

Throughout this series, I’ve touched upon various aspects of the DevOps application delivery model, covering such topics as the delivery pipeline, security as an integrated process, and the importance of incorporating database deployments into application delivery. This article ties up the series by providing ten important guidelines for implementing DevOps in your data center. Although each DevOps environment is unique to an organization—addressing its specific needs and circumstances—these guidelines can offer a starting point for planning and implementing your DevOps strategy, discussing many of the considerations to take into account along the way.

1. Build an effective DevOps culture.

You might be tempted to view this first guideline as nothing more than introductory fluff to round out the article, and certainly many teams implement DevOps before fully appreciating the value of an effective culture. But culture matters: failure to establish and encourage the right one will inevitably translate into an excruciating, inefficient DevOps process or, worse still, a complete operations meltdown.

Implementing DevOps in any organization requires a shift in thinking, a new mindset that values communication and collaboration over inflexible roles and siloed teams that can’t see beyond their own borders. Those who participate in the DevOps process must be willing to work together and recognize that they’re accountable for application delivery from beginning to end and that they have a stake in the outcome.

To establish such a culture, you need a firm commitment from management and other leadership that clearly demonstrates a willingness to dedicate the time and resources necessary to establish and encourage transparent communications, information sharing, cross-team collaboration, and a general attitude that application delivery is everyone’s responsibility.

2. Take baby steps when getting started.

DevOps is not an all-or-nothing proposition. You do not need to revamp your entire operation overnight. Forget about the massive implementation scheme and instead take small steps toward achieving your goals. A DevOps implementation requires thorough planning and careful rollout while taking into account business needs that can evolve over time. For this, you need plenty of leeway.

You should be thinking long-term toward full implementation and not try to accomplish everything in a couple of months, especially for a large-scale operation. Rushing a DevOps implementation can be as big a mistake as failing to establish the right culture. You need time to assess requirements, train participants, choose the right tools, and deploy the infrastructure. Trying to implement DevOps before you’re prepared can result in buggy applications, compromised data, and a lot of wasted time and money.

When planning your DevOps implementation, it’s better to start small than risk the entire operation. For example, you don’t need to automate every task at once or move all your applications to DevOps at the same time. You can start by automating one or two processes or by developing smaller, less critical apps. After you’ve succeeded with one phase, you can then move on to the next.

3. Plan and document your development projects.

For any DevOps project, your development process should start with a thorough planning phase to ensure that development efforts run efficiently, remain on schedule, and come in under budget. All teams involved with application delivery—including development, testing, and operations—should participate in the planning process.

As part of this process, you should set realistic milestones, taking into account the time necessary to implement new tools and infrastructure and to allow those who are new to DevOps to adjust to a different way of working. Development efforts should focus on small, incremental changes, with more frequent release cycles. This approach can lead to releases that are more reliable and predictable while helping to avoid issues that can complicate and disrupt the application delivery process.

In addition, DevOps teams should document all necessary information throughout the planning and application delivery processes. Proper documentation is crucial to establishing a culture of collaboration and communication. It helps team members understand the systems, what has changed, what caused specific issues, and how to resolve those issues. Detailed documentation can also help improve subsequent release cycles for the current project and better streamline operations for future projects.

4. Take a security-first approach to DevOps.

Security, compliance, and privacy should be factored into your DevOps processes from the beginning and continue through all phases of application delivery, whether planning your applications, setting up infrastructure, writing code, or deploying to production. Security should not be treated as an afterthought, or a segregated phase squeezed into the application delivery process right before deployment, or worse still after the application goes live. Security must be integrated into all phases, implemented continuously, and treated as a priority by all team members.

As with application development, automated testing can be integral to ensuring that data remains secure and protected, with checks performed during each release cycle. If a test exposes a potential issue, it can be tagged for a security review, and the issue addressed quickly by developers before that application has an opportunity to find its way into production. In addition, peer code reviews should look for potential security and compliance issues, along with application-specific concerns.

DevOps teams should also take the steps necessary to secure the DevOps environment and processes themselves, such as storing all code in a secure source control repository, isolating the DevOps systems in a secure network, verifying all third-party code, and adhering to the principle of least privilege. There are, in fact, several best practices an organization should follow to implement continuous DevOps security, and teams must commit to ensuring those practices are always followed.

5. Implement a code management strategy.

All DevOps teams should be using a source control solution that versions and protects files. The solution should provide a single source of truth for all files and keep a record of who changed what, while providing easy access to any specific version of the application. But version control alone is not enough. You must also ensure that you check in all relevant files—not only application code, but also configuration and change scripts, test scripts, database deployment scripts, reference data, and any other files relevant to application delivery.

A code management strategy should also address how to handle branching, which lets developers work on different features simultaneously, without stepping all over each other. You’ll have to determine the best approach to branching based on your organization’s requirements. Just be sure that the source control solution you choose contains branching tools that are robust and sophisticated enough to ensure that branching works in your favor, rather than complicating operations.

You should also take into account other considerations when planning your code management strategy. For example, you might want to implement a policy that requires developers to check in their code at least once a day, or more often if necessary. Infrequent check-ins can complicate operations unnecessarily and slow down the development effort. Also, be sure that source code and other critical files are not being managed directly on local workstations or network shares. Again, everything should be in source control.

6. Automate, automate, automate.

For DevOps to be effective, you must automate as many operations as possible and practical. The more operations that can be automated—especially the mundane, time-consuming, repetitive ones—the more efficient the overall process and the fewer the risks. Where possible, avoid manual one-off tasks in favor of repeatable operations that can be automated (keeping in mind the second guideline about taking baby steps).

You won’t be able to eliminate all manual processes, but try to make them the exception, rather than the rule. Manual processes can slow development efforts or bring them to a halt, such as when a developer stops working on code to set up a virtual machine or implement a database environment.

Most standardized DevOps operations can now be automated, helping to improve efficiency and speed up application delivery. An automated operation can be used multiple times by multiple team members in multiple circumstances. In addition, the operation can be altered and improved to address changing business requirements. Test automation is a good example of this. You can write unit tests that kick off automatically when updated code is checked into source control. The tests can run as often as necessary, and they can be updated as needed to accommodate new requirements.
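
As an illustration of the kind of automated check that can run on every check-in, here is a small, self-contained Python/pytest sketch. The discount function and its thresholds are hypothetical; the point is the pattern of fast, deterministic tests that the CI server triggers whenever code lands in source control.

```python
# test_discount.py -- a fast unit test that the CI server runs on every check-in (e.g. `pytest -q`)
import pytest

def order_discount(total: float) -> float:
    """Hypothetical business rule: discount rate by order total (in practice this lives in the application code)."""
    if total < 0:
        raise ValueError("order total cannot be negative")
    if total >= 1000:
        return 0.10
    if total >= 500:
        return 0.05
    return 0.0

def test_no_discount_below_threshold():
    assert order_discount(499.99) == 0.0

def test_mid_tier_discount():
    assert order_discount(500) == 0.05

def test_top_tier_discount():
    assert order_discount(1500) == 0.10

def test_negative_total_rejected():
    with pytest.raises(ValueError):
        order_discount(-1)
```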

7. Think continuous everything.

In the world of DevOps, application delivery is not a one-time operation but rather an ongoing process that makes it possible to continuously update and improve the application until it reaches the end of its lifecycle. As with automation, continuous practices must be ongoing and deeply integrated into the DevOps environment. DevOps is not so much a linear process as it is a continuous flow of consecutive iterations that last until the application is no longer being actively developed or maintained.

Discussions about the continuous nature of DevOps often focus on continuous integration, delivery, and deployment because of the pivotal roles they play in defining the DevOps pipeline. Continuous integration, for example, makes it possible for developers to check in frequent code changes and know that those changes are automatically verified so they can be immediately incorporated into the codebase.

But continuous integration does not operate in a vacuum. It goes hand-in-hand with continuous testing, which validates the code by running automated tests that have been predefined to look for specific issues, making it possible to identify problems early in the development process, when they’re much easier to address. Also important to continuous integration—and the DevOps environment in general—are continuous security, monitoring, and feedback, which ensure that projects stay on track, DevOps processes work efficiently, and sensitive data is not put at risk.

8. Make quality assurance a priority.

One of the foundations of effective QA is automated, continuous testing that’s integrated into the DevOps pipeline. The software should be tested at each phase of the application development process, with development and testing done in tandem, beginning with the first code check-in. In addition, the testing strategy should incorporate the environment in which the application will be running, such as verifying that the correct software versions are installed or that environmental variables are properly configured.

Other factors critical to effective QA are continuous monitoring and feedback. You must be able to track the application’s health in order to identify any issues with the application or the environment in which it runs, even as the application scales. You should also be tracking the DevOps infrastructure itself in order to optimize application delivery and alert the team to any issues in the delivery process.

DevOps teams should consider using key performance indicators (KPIs) that measure such metrics as failure rates, time to resolution, completed operations, incomplete tasks, milestones accomplished, or any other factors that can help the team understand and improve operations and the application. Teams might also consider automated dashboards that provide real-time insight into the environment and development efforts. When planning your DevOps infrastructure and projects, be sure to include QA experts so that no aspect of QA is overlooked.
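
To make the KPI idea concrete, here is a small Python sketch that computes two common metrics, change failure rate and mean time to resolution, from a hypothetical list of deployment records. The record format is invented for illustration; in practice these figures would be pulled from your pipeline and incident-tracking tools.

```python
from datetime import timedelta

# Hypothetical deployment records; in practice these would come from your pipeline and incident tooling.
deployments = [
    {"id": 101, "failed": False, "resolution": None},
    {"id": 102, "failed": True,  "resolution": timedelta(minutes=42)},
    {"id": 103, "failed": False, "resolution": None},
    {"id": 104, "failed": True,  "resolution": timedelta(hours=3, minutes=10)},
    {"id": 105, "failed": False, "resolution": None},
]

def change_failure_rate(records) -> float:
    """Share of deployments that introduced a failure requiring remediation."""
    return sum(r["failed"] for r in records) / len(records)

def mean_time_to_resolution(records) -> timedelta:
    """Average time taken to resolve the failed deployments."""
    times = [r["resolution"] for r in records if r["failed"]]
    return sum(times, timedelta()) / len(times)

if __name__ == "__main__":
    print(f"Change failure rate: {change_failure_rate(deployments):.0%}")
    print(f"Mean time to resolution: {mean_time_to_resolution(deployments)}")
```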

9. Manage your environments.

Application planning must take into account the environments in which the application will be developed, tested, staged, and deployed to production. For example, you’ll likely need separate environments for developing an application so developers can work on different parts of the application without conflicting with each other. The same goes for testing. Different environments are usually needed to ensure the accuracy of the testing processes. Then there are the environments needed for staging and deploying the application, which can vary depending on the deployment model.

To address environmental requirements, many DevOps teams are now implementing infrastructure as code (IaC), a process of automating infrastructure creation. To incorporate IaC into your DevOps processes, you start by writing scripts that define the infrastructure necessary to support your application. For example, a script might provision a virtual machine, configure its operating system, install a database management system, apply security updates, and carry out several other operations.

With IaC, the application code and configuration scripts are linked together, rather than the application being tied to a single machine or cluster. The configuration scripts run automatically whenever the application is deployed. IaC ensures that the application environment is always the same, no matter where that environment runs, while eliminating the need to set up environments manually. IaC also allows you to create as many temporary environments as necessary to support the application development process while ensuring that everyone is working in the same environment.
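
As a simple illustration of the idea (not a substitute for a declarative IaC tool such as Terraform or ARM templates), here is a Python sketch that shells out to the Azure CLI to stand up a throwaway environment: a resource group, a virtual machine, and a setup script that installs the database engine. The resource names, image, and script path are assumptions; a real pipeline would normally keep a declarative definition in source control alongside the application.

```python
import subprocess

# Hypothetical names; in a real pipeline these would be parameterized per environment.
RESOURCE_GROUP = "rg-myapp-dev"
LOCATION = "eastus"
VM_NAME = "vm-myapp-dev"

def run(args: list[str]) -> None:
    """Run an Azure CLI command and fail the build if it fails."""
    subprocess.run(["az", *args], check=True)

def provision_environment() -> None:
    # Create (or reuse) the resource group that holds the whole environment.
    run(["group", "create", "--name", RESOURCE_GROUP, "--location", LOCATION])

    # Provision a Linux VM for the application tier.
    run([
        "vm", "create",
        "--resource-group", RESOURCE_GROUP,
        "--name", VM_NAME,
        "--image", "UbuntuLTS",
        "--admin-username", "azureuser",
        "--generate-ssh-keys",
    ])

    # Configure the OS and install the database engine with a setup script held in source control.
    run([
        "vm", "run-command", "invoke",
        "--resource-group", RESOURCE_GROUP,
        "--name", VM_NAME,
        "--command-id", "RunShellScript",
        "--scripts", "@scripts/install_database.sh",
    ])

def tear_down_environment() -> None:
    # Temporary environments are deleted as easily as they are created.
    run(["group", "delete", "--name", RESOURCE_GROUP, "--yes", "--no-wait"])

if __name__ == "__main__":
    provision_environment()
```

Because the whole environment is created by code, tearing it down and recreating it for each test run is just another scripted step.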

10. Choose the right tools and technologies.

To implement a successful DevOps environment, you need tools that increase efficiency and simplify tasks. But keep in mind the second guideline about taking baby steps. You don’t need every DevOps tool out there, and you don’t need to implement every chosen tool at once. Select your tools carefully, looking for those that integrate easily with other systems, facilitate automation, foster communication and collaboration, and provide visibility into your DevOps environment and processes.

A DevOps environment can require a wide range of tools. A source control solution is, of course, a given, but you’ll also need tools for automating infrastructure, monitoring systems and applications, integrating security, tracking tasks and development cycles, managing database releases, and carrying out several other processes. Fortunately, many vendors now offer solutions that support DevOps application delivery, but they differ, so you’ll need to evaluate them carefully.

As part of this process, you should take into account the architectures and technologies you’ll be employing. I’ve already pointed to IaC as an essential strategy, but there are other technologies that can also be effective in supporting DevOps, such as containers or microservices, all of which require their own set of tools. In addition, you should evaluate the extent to which you might be using cloud technologies to augment or host your DevOps operations and how to integrate systems between platforms.

Tailor-made DevOps

There are, of course, other considerations than what I’ve discussed here, and those that I have discussed could easily justify articles of their own. Even so, what I’ve covered should help you get started in planning your DevOps strategy and provide an overview of some of the factors to take into account as part of that process.

Keep in mind, however, that each DevOps implementation is unique and should be tailored to your organization’s specific needs and circumstances. Although DevOps has proven a valuable strategy for many organizations, there is no one-size-fits-all approach to implementing DevOps, and even if there were, there would be no guarantee that each implementation would work exactly the same in all circumstances. A DevOps environment should be designed to fit your organization’s requirements and improve application delivery, not make the process more difficult for everyone involved.

SQL – Simple Talk

Read More