
Tag Archives: Deployment

Cloudflare acquires Linc to automate web app deployment

December 22, 2020   Big Data

Cloudflare today acquired Linc, an automation platform to help front-end developers collaborate and build apps, for an undisclosed amount. Cloudflare says this will accelerate development of its front-end JAMStack hosting platform — Cloudflare Pages — to enable “richer” and “more powerful” full-stack applications.

A major challenge in front-end web development is moving from a static site — i.e., a directory of HTML, JS, and CSS files — to a fully featured app. While companies benefit from the flexibility of being able to render everything on-demand, their maintenance costs increase because they now have servers they need to keep running. Moreover, unless they’re already operating at a global scale, they’ll often see worse end-user performance as requests are served from one or a few locations worldwide.

Linc and Cloudflare’s solution is the Frontend Application Bundle (FAB), a deployment artefact that supports a range of server-side needs, including static sites, apps with API routes, cloud functions, and server-side streaming rendering. A compiler generates a ZIP file that has two components: a server file that acts as a server-side entry point and an assets directory that stores the HTML, CSS, JS, images, and fonts that are sent to the client. When a FAB is deployed, it’s often split into these component parts and deployed separately. Assets are sent to a low-cost object storage platform with a content delivery network in front of it, and the server component is sent to dedicated serverless hosting.
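
To make the split concrete, here is a minimal sketch, in Java using only the standard java.util.zip API, of how a deployment tool might separate such a bundle into its server entry point and its static assets before shipping them to different destinations. The server entry name and the assets/ prefix are assumptions for illustration, not the actual FAB layout.

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Enumeration;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipFile;

    public class FabSplitter {

        // Assumed layout: files under "assets/" go to object storage behind a CDN,
        // everything else (the server entry point) goes to serverless hosting.
        private static final String ASSETS_PREFIX = "assets/";

        public static void main(String[] args) throws IOException {
            Path bundle = Paths.get(args[0]);             // e.g. build/app.fab.zip
            Path assetsOut = Paths.get("deploy/assets");
            Path serverOut = Paths.get("deploy/server");

            try (ZipFile zip = new ZipFile(bundle.toFile())) {
                Enumeration<? extends ZipEntry> entries = zip.entries();
                while (entries.hasMoreElements()) {
                    ZipEntry entry = entries.nextElement();
                    if (entry.isDirectory()) {
                        continue;
                    }
                    // Route each file to the destination that matches its role.
                    Path target = entry.getName().startsWith(ASSETS_PREFIX)
                            ? assetsOut.resolve(entry.getName().substring(ASSETS_PREFIX.length()))
                            : serverOut.resolve(entry.getName());
                    Files.createDirectories(target.getParent());
                    try (InputStream in = zip.getInputStream(entry)) {
                        Files.copy(in, target);           // fails if the target already exists
                    }
                }
            }
            System.out.println("Assets staged in " + assetsOut + ", server component in " + serverOut);
        }
    }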


Pages, which launched earlier this month, works with JAMstack (short for “JavaScript, APIs, and Markup”) to separate front-end pages and UI from backend apps and databases. It integrates with GitHub repositories and competes directly with Netlify or Vercel, two cloud hosting companies that let companies build and deploy sites using JAMstack frameworks.

“Linc’s goal was to give front-end developers the best tooling to build and refine their apps, regardless of which hosting they were using,” Linc CTO Glen Maddern wrote in a blog post. “But we started to notice an important trend —  if a team had a free choice for where to host their front-end, they inevitably chose [Cloudflare]. In some cases, for a period, teams even used Linc to deploy a FAB … alongside their existing hosting to demonstrate the performance improvement before migrating permanently.”

In the near future, Maddern, who has joined Cloudflare, says Linc’s team will focus on expanding Pages to cover “the full spectrum” of apps. Moreover, he says it will work toward the broader goals of enabling customers to “fully embrace” edge-rendering and making global serverless hosting “more powerful and accessible.”

The Linc acquisition comes after Cloudflare purchased S2 Systems, a Seattle-area startup developing a remote browser isolation solution. It’s Cloudflare’s first purchase of 2020 and its fifth since 2014.


Big Data – VentureBeat


The Staging Phase of Deployment

March 20, 2020   BI News and Info

Staging is a vital part of deploying any application, particularly a database, quickly, efficiently, and with minimum risk. Though vital, it only gets noticed by the wider public when things go horribly wrong.

On 2nd February 1988, Wells Fargo EquityLine customers noticed, on the bottom of a statement, this message:

“You owe your soul to the company store. Why not owe your home to Wells Fargo? An equity advantage loan can help you spend what would have been your children’s inheritance.”

A few days later, the company followed this with an apology:

“This message was not a legitimate one. It was developed as part of a test program by a staff member, whose sense of humor was somewhat misplaced, and it was inadvertently inserted in that day’s statement mailing. The message in no way conveys the opinion of Wells Fargo Bank or its employees.

James G. Jones, Executive Vice President, South Bay Service Center”

This mishap was an accident in Staging. It is so easy to do, and it could have been worse. In early 1993, a small UK-based company was working with one of the largest UK telecom companies to launch a new ‘gold’ credit card for their wealthier customers. They needed a mailshot. The mailshot application was in Staging and, as not all of the addresses were ready, the programmer flippantly inserted the placeholder ‘Rich Bastard’ wherever the name was NULL, before leaving the project. Sadly, the addresses that the test mailshot went to came from the live data, and his successor unwittingly ran the test. The rest of the story is in IT history: the blameless developer who ran the test nonetheless left the profession and became a vet.

Staging rarely causes a problem for the business: far more frequently, it saves the costly repercussions of a failed deployment. Had the UK’s TSB (Trustee Savings Bank) engaged in conventional, properly conducted staging practices in 2018, it would have saved its reputation and costs of 330 million pounds. Staging allows potential issues and concerns with a new software release to be checked, and the process provides the final decision as to whether a release can go ahead. The fact that this vital process can occasionally itself cause a problem for the business is ironic. Mistakes in Staging can be far-reaching. It requires dogged attention to detail and method.

Staging

It is often said that the customers of an application will think it is finished when they see a smoke-and-mirrors demo. Developers think it is done when it works on their machines. Testers think it is complete when it passes their tests. It is only in Staging, when a release candidate is tested out in a production-like environment, that it can be said to be ready for release.

The team that conducts the Staging process within an organisation has a special responsibility. If a production application that is core to the organisation’s business is being tested, senior management have a responsibility to ensure that the risk of changes to the business is minimised. They usually devolve this responsibility, via the CIO, to the technical team in charge of Staging, and it is highly unusual for management to ignore the recommendations of that team. If the two don’t communicate effectively, things go wrong. In the spectacular case of TSB, the release was allowed to proceed despite there being 2,000 outstanding testing defects at the time the system went live.

Whether you are developing applications or upgrading customised bought-in packages, these must be checked out in Staging. The Staging function has a uniquely broad perspective on a software release, pulling together the many issues of compliance, maintenance, security, usability, resilience and reliability. Staging is designed to resemble a production environment as closely as possible, and it may need to connect to other production services and data feeds. The task involves checking code, builds, and updates to ensure quality in a production-like environment before the application is deployed. Staging needs to be able to share the same configurations of hardware, servers, databases, and caches as the production system.

The primary role of a staging environment is to check out all the installation, configuration and migration scripts and procedures before they’re applied to a production environment. This ensures that all major and minor upgrades to a production environment are completed reliably, without errors, and in a minimum of time. It is only in staging that some of the most crucial tests can be done. For example, servers will be run on remote machines, rather than locally (as on a developer’s workstation during dev, or on a single test machine during test), which tests the effects of networking on the system.
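
As a rough illustration of the “reliably, without errors, and in a minimum of time” check, the hedged sketch below applies a folder of numbered upgrade scripts to a staging database over plain JDBC, timing each one and stopping at the first failure. The folder layout, JDBC URL, and the one-statement-per-file assumption are illustrative only; real migration tooling adds transactions, batching and version tracking.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class StagingUpgradeCheck {

        public static void main(String[] args) throws Exception {
            // Assumed inputs: a JDBC URL for the staging copy (driver on the classpath,
            // credentials in the URL) and a folder of ordered, single-statement scripts.
            String stagingUrl = args[0];                  // e.g. jdbc:postgresql://staging-host/app
            Path scriptDir = Paths.get(args.length > 1 ? args[1] : "migrations");

            List<Path> scripts;
            try (Stream<Path> files = Files.list(scriptDir)) {
                scripts = files.filter(p -> p.toString().endsWith(".sql"))
                               .sorted()                  // 001_..., 002_..., ...
                               .collect(Collectors.toList());
            }

            try (Connection conn = DriverManager.getConnection(stagingUrl)) {
                for (Path script : scripts) {
                    long start = System.nanoTime();
                    try (Statement stmt = conn.createStatement()) {
                        stmt.execute(Files.readString(script));
                    } catch (Exception e) {
                        System.err.printf("FAILED %s: %s%n", script.getFileName(), e.getMessage());
                        return;                           // a failure in Staging blocks the release
                    }
                    long ms = (System.nanoTime() - start) / 1_000_000;
                    System.out.printf("OK %s (%d ms)%n", script.getFileName(), ms);
                }
            }
            System.out.println("All upgrade scripts applied cleanly; the timings feed the go/no-go decision.");
        }
    }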

Can Staging be Avoided?

I’ve been challenged in the past by accountants on the cost of maintaining a staging environment. The simplest answer is that it is a free by-product of the essential task of proving your disaster-recovery strategy. Even if no significant developments of database applications are being undertaken in an organisation, you still need a staging environment to prove that, in the case of a disaster, you can recover services quickly and effectively, and that an action such as a change in network or storage has no unforeseen repercussions with the systems that you, in operations, have to support in production. You can only prove that you can re-create the entire production environment, its systems and data in a timely manner by actually doing it and repeating it. It makes sense to use this duplicate environment for the final checks for any releases that need to be hosted by the business. Not all the peripheral systems need to be recreated in their entirety if it is possible to ‘mock’ them with a system with exactly the same interface that behaves in exactly the same way. It isn’t ideal, though: the more reality you can provide in staging, the better.

Staging and Security

Before the Cloud blurred the lines, it was the custom in IT that Staging was done entirely by the operational team in the production setting, which meant a separate Ops office or data-centre. This meant that security for Staging was identical to Production. In a retail bank, for example, where I once worked as a database developer, the actual client data would be used. As my introductory stories illustrated, this could lead to highly embarrassing mistakes. However, security was excellent: To get into the data centre where Staging was done, you needed a key fob, and your movements were logged. There was close supervision, video surveillance, and nobody got a key fob without individual security vetting. It was ops territory, though I was able to call in to check a release because I’d been security-checked. This was a rigorous process that took weeks by a private investigator, an avuncular ex-cop in my case with the eye of a raptor. I explain this to emphasise the point that if the organisation has the security and organisational disciplines, you can use customer data within a production environment for Staging. Without stringent disciplines and supervision, it simply isn’t legally possible. The GDPR makes the responsible curation of personal data into a legal requirement. It doesn’t specify precisely how you do it.

It isn’t likely that you’d want a straightforward copy of the user data, though. Leaving to one side the responsibility that any organisation has for the owners of restricted, personal or sensitive data, the staging copy really mustn’t ever contain data such as client emails, phone numbers, or messages that are used for messaging, alerts, or push notifications. Any data that is peripheral in any way to the objectives of Staging, such as XML documents of patients’ case histories, ought to be pseudonymized. The Staging environment is kept as close as possible to production so that the final checks on the system and the database update scripts for the release are done realistically, but without being on production. As Staging is under the same regime as production, there isn’t a risk to the data above that of the production system.
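
As a sketch of the kind of pseudonymization being described, the snippet below replaces a direct identifier with a salted, non-reversible token; the salt handling, token length and the choice of SHA-256 are illustrative assumptions rather than a prescription.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.HexFormat;

    public class Pseudonymizer {

        private final byte[] salt;   // in practice, kept outside the staging environment

        public Pseudonymizer(byte[] salt) {
            this.salt = salt;
        }

        /** Replaces a direct identifier (email, phone number) with a stable, non-reversible token. */
        public String pseudonymize(String value) {
            if (value == null || value.isBlank()) {
                return value;
            }
            try {
                MessageDigest digest = MessageDigest.getInstance("SHA-256");
                digest.update(salt);
                byte[] hash = digest.digest(value.trim().toLowerCase().getBytes(StandardCharsets.UTF_8));
                // A stable token keeps joins and value distributions realistic without exposing the person.
                return HexFormat.of().formatHex(hash, 0, 8);
            } catch (Exception e) {
                throw new IllegalStateException("SHA-256 unavailable", e);
            }
        }

        public static void main(String[] args) {
            Pseudonymizer p = new Pseudonymizer("staging-only-salt".getBytes(StandardCharsets.UTF_8));
            System.out.println(p.pseudonymize("jane.doe@example.com"));  // same input always yields the same token
            System.out.println(p.pseudonymize("+44 20 7946 0000"));
        }
    }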

What is In-scope for Staging?

It is difficult to make any hard-and-fast rules about this because so much depends on the size of the organisation, the scale of the application, and the ongoing methodology within IT. Staging is often used to preview new features to a select group of customers or to check the integrations with live versions of external dependencies. Sometimes Staging is suggested as the best place for testing the application under a high load, or for ‘limit’ testing, where the resilience of the production system is tested out. However, there are certain unavoidable restrictions. Once a release enters Staging, there can only be two outcomes: either the release is rejected or it goes into production. Staging cannot therefore easily be used for testing integrity, performance or scalability for that particular release, because that requires interaction. It can, of course, feed the information back for the next release, but this is generally much better done in development, interactively, using standard data sets before release, so that any issues can be fixed without halting the staging process. Sometimes there is no alternative: if the tests done in Staging show up a problem that is due to the pre-production environment, such as data feeds or edge cases within the production data, then the release has to go back to development. Nowadays, where releases are often continuous, Staging can be far more focused on the likely problems of going to production, since there are more protections against serious consequences, using safety practices such as blue/green deployments, feature toggles, and canary or rolling releases.
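
One of those safety practices, the feature toggle, can be as simple as a lookup consulted at the point where the new behaviour would otherwise go live, so a problematic change is switched off rather than rolled back. The sketch below is a generic illustration, not any particular toggle framework.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    /** A deliberately tiny feature-toggle registry: flags can be flipped at runtime. */
    public class FeatureToggles {

        private final Map<String, Boolean> flags = new ConcurrentHashMap<>();

        public void set(String name, boolean enabled) {
            flags.put(name, enabled);
        }

        public boolean isEnabled(String name) {
            return flags.getOrDefault(name, false);       // unknown flags default to "off"
        }

        public static void main(String[] args) {
            FeatureToggles toggles = new FeatureToggles();
            toggles.set("new-statement-footer", false);   // dark-launched with the release

            // The release ships both paths; Staging and production decide which one runs.
            String footer = toggles.isEnabled("new-statement-footer")
                    ? renderNewFooter()
                    : renderLegacyFooter();
            System.out.println(footer);
        }

        private static String renderNewFooter() { return "New footer"; }
        private static String renderLegacyFooter() { return "Legacy footer"; }
    }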

How to Make it Easier for a Deployment in Staging

The most obvious ways of making Staging easy, even in the absence of DevOps, are to involve operations as early as possible in the design and evolution of a development, to tackle configuration and security issues as early as possible, and for the development team to automate as much of the deployment of the application as possible. The ops teams I’ve worked with like the clarity that comes with documentation. Some people find documentation irksome, but documents are the essential lubricant of any human endeavour because they remove elements of doubt, confusion and misunderstanding. As far as development goes, the documentation needs to cover at least the interfaces, dependencies and configuration data. I always include a training manual and a clear statement of the business requirements of the application. Because the Staging team has to judge the ‘maintainability’ of the system, they will want to see the instructions that you provide for the first-line responders to a problem in the production system.

Staging Issues

Several different factors have conspired to make the tasks of the Staging team more complex. These include the demands for continuous integration and rapid deployment. This has made the requirement for automation of at least the configuration management of the Staging environment more immediate. The shift to Cloud-native applications has changed the task of Staging, especially if the architecture is a hybrid one, with components that are hosted, and the presence of legacy systems that contribute data or take data.

As always, there are developments in technology such as the use of containers, perhaps in conjunction with a microservices architecture, that can provide additional challenges. However, these changes generally conspire to force organisations to maintain a permanent staging environment for major applications in active development. If there is a considerable cloud component, this can amplify the charges for cloud services unless ways can be found to do rapid build and tear-down.

Against this rise in the rate of releases and the complexity of the infrastructure, there is the fact that a regularly performed system will usually tend to become more efficient, as more and more opportunities are found for automation and cooperative teamwork.

It is in staging that DevOps processes, and a consistent infrastructure, can help. However, it is crucial to make sure that the specialists who have to check security and compliance have plenty of warning of a release candidate in Staging and are able to do their signoffs quickly. They also may need extra resources and documentation provided for them, so it is a good idea to get these requirements clarified in good time.

An acid test for a manager is to see how long a new team member takes to come up to speed. The faster this happens, the more likely that the documentation, learning materials, monitoring tools, and teamwork are all appropriate. There must be as few mysteries and irreplaceable team members as possible.

Conclusion

Despite the changes in technology and development methodology over the years, the Staging phase of release and deployment is still recognisable because the overall objectives haven’t changed. The most significant change is from big, infrequent releases to small, continuous releases. The problems of compliance have grown, and the many issues of security have ballooned, but the opportunities for automation of configuration management, teamwork and application checks have become much greater. Because releases have a much narrower scope, the checks can be reduced to a manageable size by assessing the risks presented by each release and targeting the checks to the areas of greatest risk. The consequences of a failed release can be minimised by providing techniques such as feature-switching for avoiding rollbacks. However, seasoned ops people will remember with a shudder when a small software update caused a company to lose $400 million in assets in 45 minutes. Staging is important and should never be taken for granted.


SQL – Simple Talk


Explorium raises $19 million to unify AI model training and deployment

September 11, 2019   Big Data

Explorium, a Tel Aviv-based startup developing what it describes as an automated data and feature discovery platform, today announced that it’s raised $19 million in total across several funding rounds. Emerge and F2 Capital contributed $3.6 million in a seed round, and Zeev Ventures led a $15.5 million series A.

The influx of capital comes after a banner year for Explorium, during which it says it nabbed Fortune 100 customers in industries ranging from financial services to consumer packaged goods, retail, and ecommerce. “We are doing for machine learning data what search engines did for the web,” said Explorium CEO Maor Shlomo, who together with cofounders Or Tamir and Omer Har previously led large-scale data mining and organization platforms for IronSource, Natural Intelligence, and the Israel Defense Forces’ 8200 intelligence unit.

Explorium’s platform acts like a repository for all of an organization’s information, connecting siloed internal data to thousands of external sources on the fly. Using machine learning, it automatically extracts, engineers, aggregates, and integrates the most relevant features from data to power sophisticated predictive algorithms, evaluating hundreds before scoring, ranking, and deploying the top performers.

Lenders and insurers can use Explorium to discover predictive variables from thousands of data sources, Shlomo explains, while retailers can tap it to forecast which customers are likely to buy each product. “Just as a search engine scours the web and pulls in the most relevant answers for your need, Explorium scours data sources inside and outside your organization to generate the features that drive accurate models,” he added.

Within Explorium, data scientists can add custom code to incorporate domain knowledge and fine-tune AI models. Additionally, they’re afforded access to tools designed to help uncover optimization-informing patterns from large corpora.

“Explorium’s vision of empowering data scientists by finding relevant data from every possible source in scale and thus making models more robust is creating a paradigm shift in data science,” said Emerge founding partner Dovi Ollech. “Working with the team from the very early days made it clear that they have the deep expertise and ability required to deliver such a revolutionary data science platform.”

Explorium joins a raft of other startups and incumbents in the burgeoning “auto ML” segment. Databricks just last month launched a toolkit for model building and deployment, which can automate things like hyperparameter tuning, batch prediction, and model search. IBM’s Watson Studio AutoAI — which debuted in June — promises to automate enterprise AI model development, as does Microsoft’s recently enhanced Azure Machine Learning cloud service and Google’s AutoML suite.

IDC predicts that worldwide spending on cognitive and AI systems will reach $77.6 billion in 2022, up from $24 billion in revenue last year. Gartner agrees: in a recent survey of thousands of business executives worldwide, it found that AI implementation grew a whopping 270% in the past four years and 37% in the past year alone.



Big Data – VentureBeat


How to manage your data connections, speed up deployment and improve collaboration

August 15, 2019   BI News and Info

If you’re reading this, you’ve probably experienced some of the pain associated with managing data source connections. If you’ve spent a good bit of time replacing connections while moving a process to production, struggled with collaboration within your team, or have simply found the current feature set too rigid, we have good news for you.

In the 9.3 release we introduced a new way of managing data connectivity. It allows you to easily and securely share connections via Server, move processes from one Server to another, and manage your organization’s connections at scale. This post will show why and how you should use this new feature.

Benefits of the new architecture

Share connections securely

Collaboration with your colleagues can be quite tricky, especially when you’re kicking off a project or when a new member joins your team. You want to make sure that she has access to all the necessary data sources and becomes productive as soon as possible. To date this required distributing connection configurations manually and/or using global credentials, making user management tedious.

As of the 9.3 release, our recommendation is to use Server for distributing and managing data connections. Setting up the necessary access rights and the new Vault service will provide you all the tools to share connections in a secure and scalable manner.

Deploy processes with ease

Many users of RapidMiner Server prefer to keep data sources and their access rights separate for their development and production environments. This practice has great advantages in making production more stable when committing changes. Unfortunately, doing this with the old connection architecture could be an error-prone task, due to the required manual effort to find and replace every connection in the deployed process.

The new architecture significantly speeds up this process and makes it more robust. By defining semi-absolute paths (e.g. /Connections/data warehouse) when referencing a connection, one can copy and paste a process from one Server to another and it will automatically work, without human intervention. There is no need to manually check every operator: Studio will check the path to open up the appropriate connection for accessing data.
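
Conceptually, the resolution works something like the sketch below: the process stores only the repository path, and each Server maps that path to its own connection details. This is a generic illustration of the idea, not RapidMiner’s internal API.

    import java.util.Map;

    /** Illustration only: each environment maps the same semi-absolute path to its own connection. */
    public class ConnectionResolver {

        private final Map<String, String> connectionsByPath;   // path -> connection details (e.g. a JDBC URL)

        public ConnectionResolver(Map<String, String> connectionsByPath) {
            this.connectionsByPath = connectionsByPath;
        }

        public String resolve(String semiAbsolutePath) {
            String url = connectionsByPath.get(semiAbsolutePath);
            if (url == null) {
                throw new IllegalArgumentException("No connection defined at " + semiAbsolutePath);
            }
            return url;
        }

        public static void main(String[] args) {
            // The same process references "/Connections/data warehouse" on both Servers...
            ConnectionResolver dev = new ConnectionResolver(
                    Map.of("/Connections/data warehouse", "jdbc:postgresql://dev-dwh/analytics"));
            ConnectionResolver prod = new ConnectionResolver(
                    Map.of("/Connections/data warehouse", "jdbc:postgresql://prod-dwh/analytics"));

            // ...and each environment supplies its own target without editing the process.
            System.out.println(dev.resolve("/Connections/data warehouse"));
            System.out.println(prod.resolve("/Connections/data warehouse"));
        }
    }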

How to use the new architecture

Create a connection on Server

What makes the new solution versatile is the fact that it has become part of RapidMiner’s repository system. Permissions to view, edit, and execute can be granted or revoked just as with other repository items, and connections can be dragged into the process canvas. To create one, just press Create Connection in the top menu bar, or use the right-click menu of your local or Server repository.

Enter the necessary information such as connection type (Database, S3, Azure etc.), name and location of the connection, and add a description or tags for better management.

Selecting the database type will pre-fill the generic properties of the connection, so the only things you will need to enter are the user credentials, host, and database. At this point you are ready to use the database connection in a process on Server or Studio, but note that the newly created connection will be available to every Server user with all the parameters you’ve just set.

Securely store user-level values

In cases where access rights to data are controlled with personal credentials, storing values in the connection configuration can be a security issue. Instead of entering your credentials, set them as injected parameters and mark Server as the intended source of these values. This setting determines which sources Studio will contact to retrieve the necessary information for initializing the connection. Once the setup is saved, the Vault service in RapidMiner Server still needs the user-level values. In this example we used Username and Password as the parameters that need secure storage, but any other parameter could be marked and set from the Vault.

RapidMiner Vault: a recently introduced RapidMiner Server service that stores values that can only be accessed by the user. Each Server has its own Vault service; inheritance or copying across Servers is not yet possible.

As a last step, visit the Server page and find the connection by name in Repository / Connections. You can easily spot configurations with missing values, as they are marked with a warning sign. Each item in the list is a unique representation of a connection in the related Remote Repository in Studio. “Show details” will open up the configuration. Add the necessary details with “Set injected values”.

Saving these values will complete the creation process. With this configuration, anyone else trying to use the connection will have to visit Server and add their own personal values.

Move processes freely between environments (Servers)

An additional benefit of using the Vault service is that it improves process deployment. As the Vault itself is unique per Server, connections can receive different values depending on the environment. One example is to use the credentials of a user with Write access rights when working on the process on the development Server, but Read-only rights in production. This can be easily achieved with two Servers and the same relative paths and connection name in the process. Copying and pasting the process is the only action needed to move it to, and run it in, a different environment.

Summary

To sum up, the new way of managing data connectivity will allow you to create a connection securely via Server, and to speed up RapidMiner process deployment. We encourage all of our users to start managing their connections in this newly introduced way. It will make you and your team more productive and create a more organized repository with clear dependencies between processes and connections. For more information, take a look at our ‘Connect to your Data’ documentation page.

The upcoming releases will include further improvements to the feature, including a semi-automatic migration tool to help with replacing old connections with new ones even in processes. Until then, you can still use, create and edit legacy ones.


RapidMiner


How Sisense Engineered Its Cloud-Native Linux Deployment From the Ground Up

July 19, 2019   Sisense

A couple of weeks ago, we officially launched the Cloud-Native Sisense on Linux deployment after a successful beta release cycle that kick-started in Spring 2019. 

As of 2017, Linux was running 90% of the public cloud workload. It is increasingly the OS of choice by enterprises and the cloud due to its many advantages: lower TCO, higher security, improved stability and reliability, flexibility, and more. Given this importance, we made it an organizational priority to invest in a Sisense on Linux deployment in late 2017. 

When we sat down to plan this execution strategy, we realized there were several different ways we could approach it. For us, it was critical to not only do it right so we didn’t waste time and resources but also to deliver a product that would lead our customers into the future and support their needs in the ever-growing cloud environment.

Here’s how we re-architected Sisense with the right technologies and frameworks for the task at hand without simply porting the code over.

The First Few Months

When I was tasked with the responsibility of building a Sisense Linux deployment in late 2017, a few small steps had already been taken. Two developers had started a Linux project which initially consisted of simply porting code from one OS to the other.

They started with C/C++ code, which usually takes the longest to migrate from Windows to Linux. By the end of 2017, the team was able to show their first demo, which ran queries over the ElastiCube Manager, our high-performance database, using C/C++.

Taking Stock and Restarting the Project

Even though some more progress was made, in January of 2018, we decided to take a step back and rethink our approach to this project. Often it is less scary to take what you have and what you know and continue without questioning your approach. However, it is not always the smartest or the best course of action.

Before jumping headlong into merely porting code from one OS to the other, it was necessary to think whether it made sense to migrate all components as-is or instead, to see what language or architecture would work best for the task at hand. 

We decided that where it was required and where it made sense, we would not simply port over code but rebuild the component from scratch using the most relevant stack and technology for what that component was meant to do. 

There were three “buckets” in this decision-making process: 

  • Components that would be migrated. 
  • Components that would be rebuilt from scratch using the right framework while maintaining institutional knowledge and providing a similar user experience. For example, we concluded that several components had to be rewritten in Java. To enable this, we dedicated more than a month to training the entire engineering team in Java. We also recruited Java experts to help guide and govern the design.
  • Components that would stay as-is (for example, JavaScript) with very minimal changes such as updating file names and paths.

In hindsight, this was a critical decision that paved the way for a modern, enterprise-grade, full-stack analytic application that is highly-performant, reliable, and scalable. The best part is that we were able to build it in a little over a year. 

The Right Technology for the Job


Let’s break this down some more. The Sisense application has a few key tasks handled by different components:

1. Sisense ElastiCube or Data Engine

The Sisense ElastiCube crunches hundreds of millions of records and needs to be highly optimized. It has to be close to the OS for better control of what is being done with less overhead. Most of this code was in C and C++ and was left that way.

Takeaway: C & C++ are good to use when building highly optimized processes that are close to the OS, such as building a database engine.

2. ElastiCube Management Service and Query Service

The ElastiCube Management Service and Query Service were moved away from C# and C++ and rebuilt in Java. Java is a highly-portable and mature language that plays well in building mission-critical, high-performance applications that are CPU-intensive. The agility and complexity needed in those components are such that we needed to use a lot of frameworks that come with Java and focus only on our application logic without compromising on performance.  

We already had (and continue to have) a big footprint in Node.js. It would have been easier to use Node.js everywhere. However, we resisted the urge to use Node.js everywhere and use the best language and framework for the job.

Node.js is great for responsive operations with a low memory footprint. It is easier to write in Node.js, and it is fast to debug and develop as well. However, Java has much better performance, more caching, and long state capabilities. Java is also more suited to compilation and type checking, which is important, especially when merging releases and branches over the years. Those merges can introduce many errors if they are not caught at compile time.

For example, the Management service needs to be aware of all component statuses and of Kubernetes, with a lot of control over the system. It made sense to build it in Java, as the service needs to be efficient, highly available and multi-threaded.

On the other hand, application parts that are more tightly integrated with the UI are easier to build in Node.js. For example, the original pivot was implemented in C# as an IIS application. The pivot is a full-stack component, so it made sense to rewrite it in Node.js, which allows a full-stack developer to work on both the front-end and back-end in the same technology.

For web services, it’s not recommended to use C++ because the development time is too expensive. For those reasons, eventually, we decided to go with Java and, in particular, used the Spring Boot framework. We also considered a few options like Guice or EJB (which we immediately disqualified).

Takeaway: Java is useful when building mission-critical, high-performance applications that are CPU-intensive with the need for more caching, long state capabilities, and a robust set of available frameworks. Node.js, on the other hand, is useful for responsive operations with a low memory footprint and when a developer wants to work on both the front-end and back-end in the same technology (which is the genesis of Node.js).

3. Data connectors

The .NET connector-framework was replaced with a new framework based on Java because the support for .NET on Linux is via .NET Core, which was introduced in 2016, and does not contain all the functionality of the .NET framework for Windows. The connector framework acts as a pipe for transferring data. On top of this, the actual drivers for accessing most of the database providers are written in Java, so it was only natural to code the framework in Java too. The actual data crunching is done inside the ElastiCube, which is coded in C/C++.

Takeaway: Java is a natural choice for building data connectors due to its large ecosystem including database drivers and rich frameworks.
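
To make the “pipe” role concrete, a minimal JDBC-based connector might look like the hedged sketch below: it streams rows from a source to a downstream consumer. The driver, URL, and query are placeholders, and a real connector framework adds batching, type mapping, retries, and error handling.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.Statement;
    import java.util.function.Consumer;

    /** A bare-bones connector: open a JDBC source, stream rows to a consumer, then close everything. */
    public class SimpleJdbcConnector {

        public static void stream(String jdbcUrl, String query, Consumer<Object[]> rowSink) throws Exception {
            // Requires the appropriate JDBC driver on the classpath.
            try (Connection conn = DriverManager.getConnection(jdbcUrl);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(query)) {

                ResultSetMetaData meta = rs.getMetaData();
                int columns = meta.getColumnCount();
                while (rs.next()) {
                    Object[] row = new Object[columns];
                    for (int i = 0; i < columns; i++) {
                        row[i] = rs.getObject(i + 1);     // JDBC columns are 1-based
                    }
                    rowSink.accept(row);                  // hand each row downstream, e.g. to the data engine
                }
            }
        }

        public static void main(String[] args) throws Exception {
            // Placeholder connection details for illustration only.
            stream("jdbc:postgresql://warehouse-host/sales",
                   "SELECT order_id, amount FROM orders",
                   row -> System.out.println(row[0] + " -> " + row[1]));
        }
    }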

In summary, there are certain languages most appropriate for certain operations, and choosing the correct language for the operation at hand is key.

Guidelines on Choosing the Right Technology for the Job

Containerizing Microservices

Another critical change in the Linux deployment was related to the architecture itself. While many components in the Windows deployment are microservice based, given the opportunity to re-architect Sisense, we decided to build a containerized microservices application using Docker for containerization and Kubernetes for orchestration.

We initially debated between Docker Swarm and Kubernetes for orchestration but decided to go with Kubernetes due to the rising popularity and the fact that Kubernetes was becoming the de-facto standard for container orchestration. While our teams were comfortable with Docker Swarm, which is considered more of the DevOps way, Kubernetes better handled other developer requirements like versioning, upgrades, releases, and rollbacks. We decided to go with Kubernetes keeping the future developer user in mind.

An interesting debate that comes with building a microservices architecture is the number of microservices you’ll break your application into.

Two years ago, we had a fairly monolithic application with four or five services. That is not the case anymore. We have around 20 services today. As a rule of thumb, we try not to create too many microservices, especially ones that lengthen the call chain. It is okay to add services that are not on the call chain, but in a given operation we shouldn’t involve all the microservices in the call chain (for example, 4-5 services is okay, but not all available services). It is important to remember that while microservices are a great way of building scalable and resilient applications rapidly, they also add complexity, especially in the communication between them and, eventually, in debugging. You need to find a balance between the number of microservices you create, supportability, and maintainability.

A New Way of Doing Things with Shared Storage, Updated Monitoring & Logging

Re-architecting the platform also gave us the opportunity to update old ways of doing things and create better and highly-performant new ways. For example, the Windows way of creating highly-available data is to store copies of the data on multiple servers. With this re-architecting, we were able to do away with that and rebuild that experience enabling the use of highly-distributed and available Shared Storage technologies like cloud storage providers, GlusterFS, Amazon EFS, Azure file share, Google Filestore, and many more.

Another example is logging. One of the challenges with building a microservices-based architecture is debugging because of the number of components involved and all of the different places logs can be stored. One of the first steps we took to alleviate this was to build a combined log using FluentD, which collects all the data in a centralized place. In addition, we added Grafana and Prometheus, which provide counters of what’s going on in the system by providing a detailed view of system metrics. 
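
For example, exposing a counter that Prometheus can scrape, and Grafana can then graph, with the standard Java simpleclient looks roughly like the sketch below; the metric name, label, and port are illustrative and not Sisense’s actual instrumentation.

    import io.prometheus.client.Counter;
    import io.prometheus.client.exporter.HTTPServer;

    public class QueryMetrics {

        // A counter the Prometheus server can scrape from /metrics.
        private static final Counter QUERIES = Counter.build()
                .name("example_queries_total")
                .help("Total number of queries handled by this service.")
                .labelNames("status")
                .register();

        public static void main(String[] args) throws Exception {
            // Expose the default registry on port 9100 for the Prometheus scraper.
            HTTPServer server = new HTTPServer(9100);

            // Wherever a query is handled, bump the counter with an outcome label.
            QUERIES.labels("ok").inc();

            System.out.println("Metrics exposed on :9100/metrics");
            Thread.sleep(60_000);   // keep the endpoint up briefly for demonstration
            server.stop();
        }
    }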

Learnings Along the Way

While we have come out on the other side of a successful project, the journey was not without difficulties. Some of these were challenges that we learned from and others were limitations that we have had to work with in order to provide the best experience for the end user.

1. Embracing open source technologies

We learned that embracing well-tested and mature open source technologies is a game changer in how quickly and efficiently we can build a large-scale, enterprise-grade application. This tech is not something to be afraid of. Better yet, some of these technologies provide us with a completely different way of thinking about the problem (like the shared storage solution).

2. Wiping code and rebuilding where needed

We learned not to be afraid to wipe out code and rebuild. Today, we look back at a small portion of a component which we left in C++ and realize that it was a mistake. We could have saved time and done a better job by simply rewriting it. Keeping the code of C components that were not originally written for multi-threaded operations instead of rewriting them to make them multi-threaded was eventually more expensive. 

3. Keeping customer and end-user experience in mind

When we embarked on the Sisense on Linux deployment, it was very clear to us that we wanted to provide the same user experience in both the Windows and the Linux deployments. 

A big reason for this was to ensure that we could use the carefully curated automated testing assets across both deployments. The automated testing assets (various databases, different schemas, dashboards, validated results) were collected and built over the last couple of years. Keeping the same automated testing assets was a top priority. The ability to test both deployments with the exact same assets is an important tool for ensuring we were retaining data integrity between the two systems. This meant that, in certain areas, we chose not to change something on the front-end that we could have changed, in order to ensure the end-user experience was not affected.

We also wanted to make the transition process from Windows to Linux (if asked for) to be quick and painless. To address this, we built a migration tool that allows our customers to move over all the work assets from Windows to Linux seamlessly so that they do not have to worry about rework.

4. Organization-wide focus and cross-company collaboration

A critical component of our success lay in cross-collaboration across R&D teams, and later with non-R&D teams, across the company. The Linux deployment is a completely new platform that touches every aspect of our organization and, at any given point, we had a significant number of Sisense developers contributing to it. 

Additionally, this required changes outside of R&D.

  • Technical support teams needed to know how to debug issues and support customers using a completely new OS and new technology. 
  • Pre-sales engineers needed to know how to successfully install and demo the new deployment to customers and needed to learn about the details of the new architecture. To facilitate the training of the tech teams, we not only subscribed the teams to external courses but also flew internal R&D trainers around the globe to share knowledge and educate the teams at various Sisense locations.
  • Sales and marketing teams also needed to become familiar with cloud-technology and the benefits of the Cloud-Native Sisense on Linux deployment in order to convey these benefits to customers and prospects. 

It was essential to garner buy-in across the organization with the full-support and prioritization coming from senior leadership. Without a vision and cross-organization goals, no project like this could come to fruition.

Summary

The Cloud-Native Sisense on Linux deployment marked a milestone in our journey as we became the only data and analytics platform with an advanced containerized microservices architecture that is purpose-built from the ground up with best-of-breed-technologies like Docker containers and Kubernetes orchestration that can be deployed on the cloud or on-premises. It provides the full Sisense platform including the Elastic Data Hub, which offers both live in-database connectivity to all major cloud databases, as well as Sisense’s proprietary In-Chip™ Performance Accelerator. The deployment fits seamlessly into DevOps processes and enables faster delivery, resiliency, and scalability. 

We started this journey with a vision of building a true next-gen analytics platform that will lead the way in how organizations build large-scale analytic applications. We are proud of the platform being deployed with our customers today. 

We successfully made this transition in a little over a year, and while we had some setbacks and difficulties (as with any project), the decisions around how to approach this project, like not shying away from rewriting components where needed, not only sped up the process but also allowed us to build a platform that can provide the most value to our customers in the cloud and the web-based world we work in.

As we continue rolling out this new, full-stack Cloud-Native Sisense deployment, we are carefully working with teams across Sisense to make this a great experience for our customers and enable them to go from data to insights even faster in a highly-scalable and resilient environment.



Blog – Sisense


Best Practices for Microsoft Dynamics 365 Solution Management and Patch Deployment

June 12, 2019   Microsoft Dynamics CRM

Do you find yourself making changes directly in Production? If you don’t get it perfect the first time, this can really impact your users, nullify user adoption and introduce unnecessary “down time.”

In this one-hour webinar, we will cover best practices for using different organizations to develop and test your changes. You will see how your changes can then be easily deployed to Production with minimal or no user impact. Solutions and Patches allow instant visibility into what changes have been made and by whom. Whether you have one system administrator or a whole team of system administrators and developers, patches allow you to configure your system collaboratively without interrupting each other.

This webinar will provide you with best practices surrounding your Dynamics 365 organizations, publishers, solutions and patches. We will walk through different scenarios and processes. You will be able to see how easy it is to implement this in your Dynamics 365 organizations right away!

Make the most out of your Sandbox environment by using Solutions and Patches. No more production down-time.

Thursday, June 13th, 11:00 AM to 12:00 PM

Can’t attend live? You should still register! We will be sending out a recording to all registrants after the webinar.

Beringer Technology Group is a leading Microsoft Gold Certified Partner specializing in Microsoft Dynamics 365 and CRM for Distribution. We also provide expert Managed IT Services, Backup and Disaster Recovery, Cloud Based Computing and Unified Communication Solutions.


CRM Software Blog | Dynamics 365


Power BI Premium Deployment and Management Whitepaper added!

February 28, 2019   Self-Service BI

A new whitepaper authored by Peter Myers presents a comprehensive body of knowledge covering all aspects of deploying, scaling, troubleshooting and managing a Power BI Premium deployment in an enterprise. The whitepaper provides both background information explaining how various Power BI concepts play together in an enterprise deployment and practical examples of challenges encountered when scaling Power BI Premium-based solutions, showing how the tools available to Power BI Service Administrators and Capacity Administrators, such as the Premium Capacity Metrics app, are used to identify the causes of the symptoms observed and devise solutions to these challenges.

The whitepaper is available on the Power BI Docs site.


Microsoft Power BI Blog | Microsoft Power BI


Amazon unveils AWS Inferentia chip for AI deployment

November 29, 2018   Big Data

Amazon today announced Inferentia, a chip designed by AWS especially for the deployment of large AI models with GPUs, due out next year.

Inferentia will work with major frameworks like TensorFlow and PyTorch and is compatible with EC2 instance types and Amazon’s machine learning service SageMaker.

“You’ll be able to have on each of those chips hundreds of TOPS; you can band them together to get thousands of TOPS if you want,” AWS CEO Andy Jassy said onstage today at the annual re:Invent conference.

Inferentia will also work with Elastic Inference, a way to accelerate deployment of AI with GPU chips that was also announced today.

Elastic Inference works with a range of 1 to 32 teraflops of data. Inferentia detects when a major framework is being used with an EC2 instance, and then looks at which parts of the neural network would benefit most from acceleration; it then moves those portions to Elastic Inference to improve efficiency.

The two major processes required to launch AI models today are training and inference, and inference eats up nearly 90 percent of costs, Jassy said.

“We think that the cost of operation on top of the 75 percent savings you can get with Elastic Inference, if you layer Inferentia on top of it, that’s another 10x improvement in costs, so this is a big game changer, these two launches across inference for our customers,” he said.

The release of Inferentia follows the debut Monday of a chip by AWS purpose-built to carry out generalized workflows.

The debut of Inferentia and Elastic Inference was one of several AI-related announcements made today. Also announced today: the launch of an AWS marketplace for developers to sell their AI models, and the introduction of the DeepRacer League and AWS DeepRacer car, which runs on AI models trained using reinforcement learning in a simulated environment.

A number of services that require no prior knowledge of how to build or train AI models were also made available in preview today, including Textract for extracting text from documents, Personalize for customer recommendations, and Amazon Forecast, a service that generates private forecasting models.


Big Data – VentureBeat


Best Practice: Apply Labels to Deployment Version

October 3, 2018   Microsoft Dynamics CRM

When working with integration packages, or any code, we sometimes need to roll back all the changes. Using TFS labels lets you take a snapshot of the deployed files, which helps you review, build, or roll back changes easily.

This blog will show you the step-by-step process on how to apply labels.

1. Right-click on the file or folder you would like to have a label.


2. Add the Label.


If you need to roll back or view the labeled version of the code, you can find the label as shown below:

In Source Control, go to Find, and then to Find Label.


3. Type in the label you would like to find.


The Find button will display the matched list. Choose the label and retrieve the version for your files.

Looking for more Dynamics 365 tips? Subscribe to the PowerObjects blog for more!

Happy Dynamics 365’ing!


PowerObjects- Bringing Focus to Dynamics CRM


Microsoft Optimizes Delivery of Microsoft Dynamics 365 Cloud Version with Continuous Deployment Updates

August 24, 2018   CRM News and Info

Microsoft is changing the way Microsoft Dynamics 365 updates will be delivered: major cloud updates will be deployed twice a year in April and October to provide new capabilities and functionality to all Dynamics 365 users. These updates will be backwards compatible, while regular performance updates will be rolled out throughout the year as before, ensuring business continuity for organizations.

Whereas customers were previously able to skip a cloud release, they will no longer be able to do so, as all users will now be required to migrate to the latest version with each new release. For this reason, all users will have to be on the new version before January 31, 2019.

What does this mean for your organization?

This new continuous update deployment will bring several benefits to users and organizations using Dynamics 365:

  • Organizations will enjoy up-to-date features and optimized performance. A cloud-based solution ensures immediate access to new functionalities, but organizations sometimes delay their migration to new versions. By ensuring that everyone is using the latest version, users will be able to take advantage of the latest features and capabilities as they are released.
  • The learning curve will not be as steep for your users. You will face less resistance to change within your organization. Users will have a much easier time adapting to the new features and interface changes with each new update, instead of having to contend with many significant changes at once.
  • Costs and downtime will be kept to a minimum. In the long run, skipping updates does not save you time or money. In fact, the longer you wait, the harder it becomes, as the migration path of the previous versions still has to be followed. Migrating with each update will ensure that the process is as smooth as possible every time.
  • Microsoft will be able to offer better support and an improved platform to all users. Having all users on fewer versions will help Microsoft improve their features, performance and support, as their resources will be focused on a single platform.
  • New capabilities and third-party products can be tested ahead of time to avoid system disruptions. Partners and customers will be able to test capabilities and major updates in a sandbox environment in advance. Partners will be able to better prepare their products for general availability, while organizations can validate updates before the update release to avoid any disruption.

This continuous update deployment will make it important to follow best practices and use as many features out of the box as possible to ensure smooth migrations. That said, organizations will be able to accelerate their digital transformation in a consistent and predictable manner, overall increasing reliability and performance. For more information, please read the official announcement blog post by Microsoft.

By JOVACO Solutions, Microsoft Dynamics 365 implementation specialist in Quebec


CRM Software Blog | Dynamics 365
