
Tag Archives: Build

Build and Release Pipelines for Azure Resources (Logic Apps and Azure Functions)

January 16, 2021   Microsoft Dynamics CRM

Today’s blog post is lengthy but should serve as a fantastic resource going forward. We will explore Build and Release Pipelines for Azure Resources as follows: Introduction, Creating a build pipeline, Creating a release pipeline, Task Groups, Library, and Logic App parameters. Enjoy (and bookmark!). Azure Pipelines is a fully featured continuous integration (CI) and continuous delivery (CD) service.

Source: PowerObjects – Bringing Focus to Dynamics CRM

How to build AI applications users can trust

December 28, 2020   Big Data

To work effectively, algorithms need user data — typically on an ongoing basis to help refine and improve the experience. To get user data, you need users. And to get users, especially lasting users who trust you with their data, you need to provide options that suit their comfort levels now while still allowing them to change those choices in the future. In essence, to get user buy-in, you need a two-step approach: Let users know what data you want to collect and why, and give them control over the collection.

Step 1: Providing continuous transparency

The first step in finding the balance is to equip your users with knowledge. Users need to know what data is being collected and how that data is being used before they decide to engage with an application. Already, mounting pressure on the industry is steering the ship in this direction: Apple recently announced a privacy label for all of its applications that will promote greater awareness for users around what data is being collected when they use their apps. Microsoft’s CaptionBot, below, is a good example of how to give users an easy-to-understand overview of what’s happening with their data behind the scenes.

Above: Microsoft’s CaptionBot offers clear information about data storage, publication and usage as well as an easy-to-understand overview of the kinds of systems working behind the scenes to make the AI captioning tool work.

Health app Ada, below, is an example of how to avert user confusion over data collection choices.

Above: Health app Ada explains the logic behind its input selections at the outset, so users can understand how their inputs affect the application and its ability to perform the desired actions.

Not only does sharing this information upfront give users a sense of empowerment and help build trust with your experience over time, it also gives you an opportunity to help them understand how sharing their data can improve their experience — and how their experience will be diminished without that data. By arming users with information that helps them understand what happens when they share their data, we also arm them with the tools to understand how this exchange can benefit them, bolstering their excitement for using the app.

In addition to these details upfront, presenting users with information as they use the application is important. Sharing information about algorithm effectiveness (how likely the algorithm is to succeed at the task) and algorithm confidence (how certain the algorithm is in the results it produced) can make a big difference when it comes to user comfort in engaging with these technologies. And as we know, comfort plays a major part in adoption and engagement.

Consider the confidence ratings Microsoft offers in some of its products, below.

Above: When an algorithm is making a “best guess”, displaying a confidence rating (in the first image using Microsoft’s Bing Image Search, a rating between 0 and 1, and in the second from Microsoft’s Celebs Like Me, a percentage rating) helps users understand how much trust they should place in the outcomes of the algorithm.
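
As a minimal, hypothetical sketch of how an interface might surface that kind of confidence to users (the thresholds and wording below are illustrative choices, not taken from Microsoft’s products):

# Hypothetical sketch: present a model's output together with its confidence,
# so users can judge how much trust to place in it. Thresholds are illustrative.
def caption_with_confidence(caption: str, confidence: float) -> str:
    if confidence >= 0.9:
        return f"{caption} (confidence: {confidence:.0%})"
    if confidence >= 0.5:
        return f"Best guess: {caption} (confidence: {confidence:.0%})"
    return f"I'm not sure, but it might be: {caption} (confidence: {confidence:.0%})"

print(caption_with_confidence("a dog catching a frisbee", 0.93))
print(caption_with_confidence("a dog catching a frisbee", 0.61))

The exact thresholds matter less than the habit of never presenting a low-confidence result as a certainty.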

Users should be given insight into some of the operations and mechanics, too. So it’s important to acknowledge when the mechanisms are at work or “thinking”, or when there’s been a hand-off from the algorithm to a human, or when data is being shared to third-party systems or stored for potential later applications. Continually offering up opportunities for building awareness and understanding about your application will lead to higher levels of trust with using it. And the more users trust it, the more likely they will be to continue to engage with it over time.

Step 2: Handing over control

Even when the benefits of an application are compelling enough for users to opt in, users don’t necessarily want to use AI all the time. There may be circumstances when they want to withdraw from or limit the amount they engage with the technology. How can we empower them to choose the amount of AI they interact with at the moments that matter most? Again, a combination of upfront and semi-regular check-ins works well here.

When informing users about what data you’re collecting and how it’s being used, give them the chance to opt out of sharing certain types of data if the use case doesn’t meet their needs. Where possible, present them with a graduated series of options — what you get when you enable all data sharing versus some versus none — to allow them to choose the option that makes the most sense for them.

Consider the example below from food-ordering app Ritual.

Above: Popular food ordering app Ritual allows users to opt out of sharing certain data and also informs users of how opting out will impact the application’s functionality.

Whenever you add a new product feature or a user engages with a feature for the first time, prompt them to look at or change their level of data sharing. What may not have seemed relevant to them before could be very compelling with a new use case presented. And if a new type of data is being collected, prompt them again.

One final way to offer up control: Give users the chance to direct the application. This can mean simply checking in with your users from time to time about what features they like, which ones they don’t, and what they want from your application. Or, more importantly, it can be as a part of the application itself. Can users adjust the level of certain inputs to produce different results (e.g. weighting one input over another for a recommendation algorithm)? Can they go back a step or override certain aspects manually? Handing over the controls in as literal a sense as possible helps users feel empowered by the application instead of intimidated by it.
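
As a rough illustration of that last point (the feature names and weights below are invented, not drawn from any of the apps mentioned here), a recommendation score with user-adjustable weights might look like this:

# Hypothetical sketch: user-adjustable weights for a recommendation score.
# The feature names and values are illustrative, not from a real product.
def score_item(features: dict, weights: dict) -> float:
    """Weighted sum of normalized feature values (all assumed to be in [0, 1])."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

def rank_items(items: dict, weights: dict) -> list:
    """Return item names ordered by score, highest first."""
    return sorted(items, key=lambda name: score_item(items[name], weights), reverse=True)

items = {
    "lunch_spot_a": {"distance": 0.9, "price": 0.4, "past_orders": 0.8},
    "lunch_spot_b": {"distance": 0.3, "price": 0.9, "past_orders": 0.2},
}

default_weights = {"distance": 0.5, "price": 0.3, "past_orders": 0.2}
print(rank_items(items, default_weights))   # favors lunch_spot_a

# The user decides price matters most and that order history should be ignored.
user_weights = {"distance": 0.2, "price": 0.8, "past_orders": 0.0}
print(rank_items(items, user_weights))      # now favors lunch_spot_b

Exposing the weights, even through a simple slider, lets users watch their adjustments change the ranking immediately.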

Youper’s AI therapy app provides a good example of how to offer users control.

Above: Youper’s AI therapy app doesn’t require users to set all parameters at the outset but instead offers up regular opportunities for them to refine their experience as they continue to engage with the application (and explains why it may help them to do so).

Every application is different, and every approach to empowering users will be a little different as a result. But when you offer transparency into how and why your system is taking in the information that it is, and you give consumers the chance to opt out of sharing certain pieces of information, you create space for trust. And when your users trust you, they’ll be more inclined to share the data you need to make your products and services come alive.

Jason Cottrell is Founder and CEO at Myplanet.

Erik von Stackelburg is CDO at Myplanet.


Big Data – VentureBeat

Learn Things So You Can Build Things — A Data Analyst’s Opinion

December 6, 2020   BI News and Info

This blog post is guest written by Tomi Mester of data36.com. 

When it comes to data science, it’s not about what you learn. It’s about what you are able to build with what you’ve learned.

The field of data science has been growing rapidly—especially in the last few years. We see exciting new tools and methods emerge all the time. And while they can be great, I feel they can cause some confusion as well. Why? Because they make data professionals think about the wrong questions.

Asking the wrong questions

What do I mean by asking the wrong questions?

Examples of wrong questions might be:

  • What are the coolest new tools to try out?
  • What are the most exciting data science problems nowadays?
  • How can we fit these into our business (to experiment with them)?

Instead, we want to ask better questions like:

  • What business problems (or opportunities) do we have right now?
  • How can data help with this?
  • Why and how will our data project be useful for the company?
  • What should I learn to start building it?

Within data science, there is enormous hype around new tools every time a new machine learning algorithm is released. Or a new cloud-based solution is available. Or a new module is implemented for this or that programming language. And so on.

But aren’t these new tools important? Well, yes, but…

Tools are important, but with a caveat

Let’s think about an example from cooking. You can’t cook soup without a spoon. But when eating the soup, very few people will say: “Hmmm, you have a pretty nice wooden spoon.” Instead, most of them will say: “Yum, this food tastes really good!”

And that’s because, at the end of the day, tools are just tools. You have to learn how to use them…

But that’s not the full sentence. It’s rather:

You have to learn how to use them so you can build useful things with them…

And that’s still not quite all.

You have to learn how to use them so you can build useful things with them that will have a positive impact on your business’s bottom line.

Maybe it sounds obvious written down. And if it is for you, that’s great. But I see many data professionals choose to focus on fancy data science solutions over the data science solutions they actually need. And then they hit a wall.

Unpopular opinion: most data scientists won’t need to know anything about deep learning

Let me give you just one example: deep learning.

I run a data science blog where I publish tutorials for aspiring data scientists on topics like the basics of Python or the basics of SQL, and so on.

And I get this question every week from someone: “When will you publish a tutorial on deep learning?”

And the answer is always the same: never.

Okay, I have to admit, I played around with the idea of quickly drafting an introductory article on the topic… But it was tempting only for one reason: I know I’d get a lot of clicks for that article.

Most people want to learn about deep learning only because it’s popular. Why is it popular? Because it’s used for cool stuff, like self-driving cars at Tesla—and for that reason it gets a huge amount of media attention. That makes people excited and suddenly everyone wants to apply deep learning in their own projects.

But (at least in my opinion) it doesn’t work that way! A data science project should always start by defining the problem you want to solve. And once you have that, then you can choose the best tool to get the job done!

The naked reality is that, in most data science projects, there is a much higher demand for more traditional tools, like:

  • descriptive analytics and reporting
  • data cleaning and data wrangling
  • automating your processes
  • simple predictions and forecasting
  • simple classification methods

I know, at first, these sound less cool than deep learning… But believe me, when you are working on a real project, they are just as exciting (if not more)! Why? Because they get you useful information a lot more quickly than trying to tackle a project with something complicated like deep learning.
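
As a small, hypothetical illustration of how far those “less cool” tools can go (the column names and figures below are made up), a few lines of pandas already cover cleaning, descriptive reporting, and a naive forecast:

# Hypothetical example: the sample data and column names are invented for illustration.
import pandas as pd

sales = pd.DataFrame({
    "month":   ["2020-07", "2020-08", "2020-09", "2020-10", "2020-11"],
    "revenue": [12400, 13100, None, 14250, 14900],   # one missing value to clean
})

# Data cleaning: fill the gap with the surrounding average.
sales["revenue"] = sales["revenue"].interpolate()

# Descriptive analytics and reporting.
print(sales["revenue"].describe())

# A "simple prediction": extrapolate next month from the average month-over-month change.
forecast = sales["revenue"].iloc[-1] + sales["revenue"].diff().mean()
print(f"Naive forecast for next month: {forecast:.0f}")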

RapidMiner

How to Build a Flexible Developer Documentation Portal

October 8, 2020   Sisense

Developing analytic apps is a bold new direction for product teams. The Toolbox is where we talk development best practices, tips, tricks, and success stories to help you build the future of analytics and empower your users with the insights and actions they need.

When creating a resource and community to help developers get the most out of your product, it’s important to empower them to contribute to developer documentation and not just have all your content coming from product or tech writers. 

If you haven’t read how we overhauled our developer portal recently, check out our prior conversation with Moti Granovsky, Sisense’s Head of Developer Relations. In this second part, we will take a deeper dive and share how we built a new system from the ground up to both deliver great information and be flexible enough to allow for continued evolution and growth.

Let’s kick off our journey into the rebuild by understanding what our requirements were and how we went about meeting them.

Building instead of buying

Shruthi Panicker: Why did we choose to build the portal on our own from scratch and not just use one of the many great products out there?

Moti Granovsky: There are great products out there focused on developer documentation, which are used by many companies successfully. When we set about rebuilding the portal, we knew we wanted engineers to contribute to our developer documentation. We also wanted to auto-generate some of the content, especially API references (which are very dry; there’s no “writing” necessary). Our product has so many APIs (rich and large) that writing and maintaining their documentation is very time-consuming. We didn’t want tech writers to waste time repeatedly updating tables and tables of parameters.

We wanted to invest our time in generating content that is more engaging and valuable, like tutorials, open-source demos, cool features, and so on.

Make your requirements specific 

SP: What were you asked to do?

MG: Our requirements were, in essence:

  1. The platform should allow us to separate styling from content. We wanted to make sure that unlike the old website, it looks great, matches the Sisense brand, and keeps on looking great even as more content is added. This means not only did we want to white-label and design the content, we wanted to separate the layout and design from the content. (Remember, the people who are writing documentation are not necessarily experts at visual design.)
  2. The format should be accessible to developers (aka Markdown-based). Besides a UI-based editor, some content management systems have their own specific language to programmatically format content. However, oftentimes nobody knows enough about these languages, and they’re not very intuitive either. Instead, we looked at the most common way for developers to write content — Markdown. It is used by Git itself and all Git providers. Even readme files for projects are written in Markdown. Every developer knows this syntax, and it has this benefit of separating design from content. Markdown content will also allow us to auto-generate content. If you look around, most API tools like Swagger, for example, have the ability to generate Markdown content.
  3. Our customers should be able to contribute to docs (eventually). If a customer finds a typo or a mistake or they want to add some detail, we want to make that possible. For that, we should be able to store the content somewhere public, such as a GitHub repository. Markdown makes this possible since it’s text format and can be easily shared in GitHub. But we need the content to be separate from the website itself (something that is publicly available). If we were to write HTML pages and put them on a server, then a customer can’t contribute to that. But if those pages are generated from something that is publicly accessible like a GitHub repository, then we can solve this challenge.
  4. We should go beyond just the documentation. We wanted the new website to be a true developer portal — a hub of knowledge and information for developers with which they can find all the resources they need. We want to make sure that, in the future, it includes a blog, playground, links to community, etc. that are not necessarily documentation. We want to create a home for developers where they can come and find anything that they need to do their job.

We looked at products in the market, and none of them met all of these requirements combined. Additionally, they were relatively costly because they include hosting, but documentation is generally static content so it is cheap and easy to host. There was no reason we couldn’t build it ourselves since we have the technical know-how to do it in-house.
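
As a rough sketch of the kind of auto-generation described in requirement 2 above (this is not Sisense’s actual tooling, and the endpoint and parameters below are invented), a small script can turn a machine-readable API description into a Markdown reference table:

# Hypothetical sketch: generating a Markdown API reference from structured data.
# The endpoint and parameters are invented; a real pipeline might read a Swagger/OpenAPI file.
endpoint = {
    "name": "GET /api/v1/dashboards",
    "summary": "Returns the dashboards visible to the current user.",
    "params": [
        {"name": "ownerId", "type": "string",  "required": False, "desc": "Filter by owner."},
        {"name": "limit",   "type": "integer", "required": False, "desc": "Maximum results to return."},
    ],
}

def to_markdown(ep: dict) -> str:
    lines = [f"## {ep['name']}", "", ep["summary"], "",
             "| Parameter | Type | Required | Description |",
             "| --- | --- | --- | --- |"]
    for p in ep["params"]:
        required = "yes" if p["required"] else "no"
        lines.append(f"| `{p['name']}` | {p['type']} | {required} | {p['desc']} |")
    return "\n".join(lines)

print(to_markdown(endpoint))  # paste the output into the docs repo, or write it to a .md file

A real pipeline of this kind would read the definitions from an API spec and commit the generated Markdown into the documentation repository, so tech writers never maintain parameter tables by hand.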


Choose the right framework

SP: So what tools did you decide to use to build the new portal?

MG: We found out some companies (and a huge shoutout to the developer relations team at Okta whose approach to developer docs inspired us) have taken this different approach and built their own websites using different tools. There are frameworks that can generate websites from Markdown. We looked at different ones and picked one that fit the Sisense technology stack in terms of languages our developers are familiar with. Since most of our developers are JavaScript developers, we wanted one that used JavaScript and not Python, for example.

We found one called VuePress that did exactly what we needed. It has all the flexibility in the world. It is an open-source framework built and used by the Vue.js project. It allowed us to build exactly the type of website we had in mind in terms of look and feel while avoiding a lot of the effort involved compared to starting from scratch.

We invested effort in creating a great design and refactoring a lot of the existing content we had into Markdown from the old format. However, this process let us build the new portal from the ground up exactly the way we want without any compromises. 

While it took a lot of effort initially, in the long term it will all pay off. First, the hosting is very cheap. Second, if we have to add new content, we don’t have to design anything. It will only be as expensive as writing that text. Anyone can just write text in a notepad and have beautiful content up on the website.

New portal, new options

SP: What can users expect to find in the new portal that they couldn’t in the old one? 

MG: As a result of the new design, there are a few big changes in the new Sisense developer portal:

  • Separation of API reference and documentation: Developers can find a dedicated API reference section that is well organized and structured. They can quickly access the exact information they need. It is a lot more detailed and a lot more standardized than what we previously had, and they don’t have to scroll a bunch of text to access what they need. 
  • More code samples right within the documentation: Code can be easily copied with a copy code button. It is nicely formatted, and the syntax is highlighted according to the language, making it much easier to read.
  • A lot more cross-referencing and links than we used to have: When something is mentioned, it is always linked right there, or you can find the relevant link at the bottom of your page. You can navigate through and gain all the knowledge to accomplish a task without breaks in the thought process.
  • Improved search: You can quickly see search pages and headers within pages. You can quickly jump into specific areas within a page.
  • Responsive mobile experience: We also optimized all pages for mobile consumption.
  • New and refactored content: Much of what’s been added to the portal is entirely new, plus we refactored a lot of existing content, revalidating and updating it.
  • Broad range of content types: We added a lot of external references to our GitHub account, webinars hosted on Vimeo, and a playground built in conjunction with the developer portal.

Overall, the content now is not just a block of text but part of an ecosystem of knowledge that developers can utilize. Everyone has a different learning process. Some prefer to read, some prefer to see a demo, some like to watch a video while others prefer to just dive right in and learn through experimentation. We are trying to provide different ways to help no matter what a person’s approach to learning is. 

In addition, we also have features that cover:

  • Developer release notes and release notes for the website itself so you can track what has changed
  • More about the DevX team, why we exist, and how to get in touch with us
  • Ability to reach the playground, blog, and forums from the portal directly

In the future, we are looking at adding a developer-focused blog within the portal directly. We are also continuously working on more video content and documentation for some of the APIs that aren’t covered yet.

How you can improve your own documentation

SP: Lastly, how can an organization decide if building its own developer documentation site ground up is the right approach for them?

MG: Take into consideration these points to decide if this approach is good for you:

  • You need engineering capability to build the website. 
  • You need to make sure that the people who are writing documentation are comfortable with the format. For example, if the people writing documentation in your organization are technical writers who are comfortable with Markdown, then great, use this approach. Or maybe they want to learn, then that is great as well. If they do not want to learn, they may need another platform that is what-you-see-is-what-you-get.
  • Make sure you have the ability in terms of IT to host and maintain the site. If you can’t assure that it will be up globally and available 24/7, then you are going to have problems. Since this is the largest interface with the largest surface area, a fault or downtime will have a huge negative impact.

It all boils down to capacity and need. If your current documentation website works and customers are happy with it, and all you need to do is to enhance and update the content, then do that. Don’t waste your time and money on building something new. But if you have a lot of room for improvement like we did, and you want to use the opportunity to upgrade and overhaul the user experience, then this is a viable solution that should be considered.


Shruthi Panicker is a Sr. Technical Product Marketing Manager with Sisense. She focuses on how Sisense can be leveraged to build successful embedded analytics solutions covering Sisense’s embedding and customization capabilities, developer experience initiative and cloud-native architecture. She holds a BS in Computer Science as well as an MBA and has over a decade of experience in the technology world.

Blog – Sisense

Build Your Smart City with TIBCO

September 22, 2020   TIBCO Spotfire

Reading Time: 3 minutes

Smart Cities Are the Future

With the rise of COVID-19, many countries are realizing how a smart city framework could help ease the spread and control of the pandemic. For example, sensors to track contacts and monitor those under quarantine have helped keep the spread under control in places like South Korea. 

So, if a connected, responsive, data-driven city can help improve people’s health and well being – why don’t we have this tech everywhere?  The answer: it’s hard to do.  

Why Are Smart Cities Hard?

First, you need to note that a smart city is a system of systems. For instance, you have assets (let’s say cars or temperature tracking cameras), and those assets need to be able to run updatable software to make them ‘smart’ and then be connected to individual systems. Each of these individual systems is incredibly complex to build and, to make a true smart city, the many individual systems also need to be interconnected, which makes it even harder.

Plus, remember, there are lots of different applications of these interconnected systems: from economic development and civic engagement to sustainable urban planning or intelligent transportation. You might want to target data-driven public safety or focus on resilient energy and infrastructure. It can be challenging to address all these different use cases.

Thankfully, TIBCO has the capabilities to enable the smart cities of the future, with solutions that can: 

  • Seamlessly connect systems and application sources all the way out to the edge where data is collected on IoT devices. 
  • Unify disparate datasets, both static and real-time, so you have well controlled access to trusted, well-governed data. 
  • Use that data to confidently predict outcomes with real-time data-driven machine learning and Artificial Intelligence (AI). 

TIBCO provides all these capabilities and is already powering smart city components around the world. Let’s look at some examples in just one area: intelligent transportation.

Making Intelligent Transportation Possible

In the intelligent transportation realm, TIBCO is helping CargoSmart look at over 5500 vessels using visual analytics and data science capabilities to optimize their journeys to save fuel and manage arrival times. As a result, CargoSmart developed a vessel speed and route monitoring application that analyzes the vessel’s speed and distance against a complex variety of factors. The application has helped ocean carriers reduce fuel consumption by up to 3.5% over the past two years.

Another great example comes from an Australian Capital city that used TIBCO for a quick win pilot to track real-time car park occupancy and notifications as well as car park optimization and parking anomaly detection. These solutions are, more and more, being powered by data science, AI, and Machine Learning that is able to recognize moving objects – such as cars – and then use that to track traffic going into, or out of, different places such as car parks. Staff from the TIBCO office in Sydney, Australia, recently demonstrated just how that might work using a public webcam looking out on the Sydney Harbour Bridge.  

One last example is seen in how Aeroporti di Roma uses TIBCO to integrate and correlate all its information sources, in order to effectively analyze passenger flow & behavior, improve customer experience, and support operational awareness to better mitigate problems. In 2018 and 2019, Rome Airport won the EUROPE Best Airport Award in the ‘over 25 million passengers’ category.

Need More?

To learn more about how TIBCO is enabling cities of the future and how these capabilities can help your business, please download the How to Build a Smart City & Smart Nation whitepaper,  watch the “How to Build a Smart City in 40 Minutes (or Less)” on-demand webinar or contact us today.

The TIBCO Blog

How to Build Business Resilience From a Responsive Architecture

September 1, 2020   TIBCO Spotfire

Reading Time: 2 minutes

Today’s markets are shifting rapidly due to several different disruptive forces. So, it begs the question: Is your business built for agility so that it is resilient as well as innovative in times of rapid change?   

Agility is the flexibility to adapt elements of your business very rapidly in response to changing market conditions, such as your processes, supply chain, and even the products and services that you offer. And since your business is likely operating more with software and data than documented processes, agility is largely created through architectural agility. So business agility is now a business imperative, and an agile architecture will lead to an agile business. 

Agile architecture is built on many modern technologies, including cloud-native applications.  These apps take advantage of the elasticity provided by cloud platforms.  New applications should be built with a cloud-native approach, and existing applications can be reengineered into cloud-native apps. Existing apps are often difficult to maintain, scale, and adapt because they are large and complex, thus they are called “monoliths”. A cloud-native architecture built with technologies such as microservices allows for more rapid adaptation and horizontal scaling, and ultimately faster time to market, happier customers, and streamlined operations.  

A focus on architecture alone will not create maximum agility. You also need to align your IT strategy with your business goals, evolve your processes, and adopt best practices from across the industry. For example, in the past, the lines of business always needed to rely on IT. Nowadays, the typical business user is much more tech-savvy. If you want to be agile, you need to realize that not everything can be built and maintained by a small IT team. To that end, you want to give your people the tools they need to build integrations or applications with ease, such as a no-code or low-code integration platform that allows anyone to create and build integrations. This creates employee synergies that unlock productivity, efficiency, and agility, allowing your business to operate faster.

Let’s take a look at a few industry examples of where business agility is crucial to success. 

Manufacturing

Manufacturers need to be more agile to adapt operations as well as their products.  This can include product development, supply chains, logistics, and distribution. Business resilience and agility can only be achieved by quickly adapting the digital platform that supports these processes.  

Retail Banking & Finance

Open banking allows banks to extend their services into adjacent markets, which enables new customer-centric business models for innovative value-added services. Responsive architectures can help to transition from closed traditional business models to a collaborative business model where products, services, and data are shared with third-parties and where agility is key.

Retail 

Over the last 6 months, the retail industry has been going through rapid changes. Retailers have had to learn how to continue to generate business when they have far fewer customers in their stores. Modern architectures allow retailers to reach customers in new ways, for example through online promotions, and to satisfy their needs through different types of operating models, such as online ordering, on-site pickup, and delivery. This requires real-time notifications across multiple applications (logistics, inventory, billing) and real-time inventory management. Lastly, it allows retailers to scale horizontally to support spikes in holiday season sales and online orders.


Watch this webinar with RTInsights to learn how you can boost business resiliency with a responsive application architecture. Also, read this eBook on TIBCO’s Responsive Application Mesh for a more in-depth look.

The TIBCO Blog

What are the ‘7 Steps to Build a Successful Business Case for MDM Programs’?

August 9, 2020   TIBCO Spotfire

Reading Time: 2 minutes

Master data management (MDM) program teams often struggle to build business cases. Part of this struggle stems from simply not having the data to back up the value of MDM. In fact, “90% of companies recently surveyed for the Gartner Magic Quadrant for Master Data Management Solutions lack formal metrics to measure the financial value contributed by an MDM solution” (1).

Without formal metrics, organizations are often left with vendor-led “proof of value” programs as validation of technology rather than demonstrations of how the right technology addresses the requirements and metrics, financial and otherwise, demanded by the stakeholders in your organization. This approach is incomplete as it doesn’t look at the technology as a whole and may limit the organization’s success in the long term.

Figure 1: Percentage of organizations that measure the value of their master data (2)

We’d like to address that gap in the enterprise software industry and shift how organizations buy enterprise infrastructure technology by providing clear-eyed, well-documented points of view on how technology can meet business requirements while creating financial value.

Gartner’s 7 Steps to Build a Successful Business Case for MDM Programs 

We believe educating buyers on the value of technology is a critical first step. So, make sure to check out Gartner’s most recent MDM report, “7 Steps to Build a Successful Business Case for MDM Programs,” written by esteemed analysts and MDM experts: Malcolm Hawker, Simon Walker, and Sally Parker (3). This report provides a seven-step process that data and analytics leaders can use to gain and maintain buy-in for MDM from key business stakeholders. It can help you put together a business case that demonstrates value with measurable key performance indicators (KPIs) and outlines a long-term roadmap for MDM as a key part of your analytics strategy. 

Informed and active stakeholder support greatly increases the likelihood of an MDM program succeeding. Download this report to learn more about how to gain support and reach your MDM program goals.


And when you’re ready, we urge you to reach out to our team. Have that deeper discussion, not just on the technology, but also, on all the ways the technology—and TIBCO’s market-leading MDM capabilities—can address your business needs and create unique value for your organization.      

(1) 2019 Gartner Vendor Survey for the Magic Quadrant of Master Data Management Solutions
(2) This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document.
(3)  Gartner, 7 Steps to Build a Successful Business Case for MDM Programs, Malcolm Hawker, Simon Walker, Sally Parker, 7 April 2020

The TIBCO Blog

How to Build Resilient Supply Chains with a Digital Nervous System

July 20, 2020   TIBCO Spotfire

Reading Time: 2 minutes

Supply chains, which increasingly rely on global connections, have had a number of disruptions over the past 12 months, requiring greater resiliency and adaptability.

Buying intentions and stock inventory can change overnight, and many supply chain systems may not be ready for rapid shifts in purchasing patterns. And even the most intelligent companies with the most sophisticated analytics may not be able to keep up without real-time data. 

This ever-changing market landscape has created a need for greater collaboration and increased information flow along the entire supply chain. Lack of cross-function support, limited workforce engagement, and lack of robust technology make it hard for organizations to achieve their supply chain goals. 


To solve these challenges, your organization needs a smart, real-time, data-driven supply chain management solution that delivers: real-time visibility, continuous data, streaming artificial intelligence, early warnings, risk predictions, advanced analytics, and collaboration. All the elements needed so your company, suppliers, and your entire supply chain can adapt quickly to today’s changing conditions.

Think of this intelligent supply chain as a “digital nervous system” that combines leading technologies, including visual analytics, data science, streaming apps, data virtualization, metadata, and integration. All of these tools work together to create a seamless, intelligent supply chain, making full use of all the data generated.

One example of an intelligent supply chain nervous system made possible by TIBCO is Bayer Crop Sciences. Its approach implemented enterprise-scale data integration, data virtualization, 14,000 visual analytics users, streaming image analytics, and AI and edge computing to drive precision planting and farming.

This enabled the company to:

  • Share collaborative analytics for added agility and flexibility in precision agriculture use cases
  • Blend image data from drones with traditional sources (soil, irrigation, and fertilization) without moving or altering source data
  • Synchronize and replicate high-scale data integrations between crop profiles and its CRM

Join us for this webinar with RTInsights to learn how you can implement TIBCO Connected Intelligence solutions to build a resilient supply chain with a digital nervous system. Also, read the paper on TIBCO’s Supply Chain Nervous System for a more in-depth look. 

The TIBCO Blog

Using a Public Web Service to Consume SQL Server Build Numbers

June 18, 2020   BI News and Info

During my time as a professional DBA, I have been responsible for quite a few SQL Server instances, all with their own set of characteristics, including different versions and patching levels, the topic of this article.

One of the primary duties that DBAs have is making sure that the SQL Server instances are patched and up to date, as Microsoft makes available all the corresponding Service Packs and/or Cumulative Updates. This responsibility, of course, applies for the on-premises deployments of SQL Server. Microsoft manages most database services for you in Azure, however, you will still be responsible for keeping Azure Virtual Machines up to date.

With this said, how can you make sure that your set of SQL Server instances are patched up to the latest available release by Microsoft? One obvious way is to manually check the current releases of SQL Server and compare that list against what you currently have, making a note of the current status of your environment and pointing out any differences. However, doing this can consume a lot of time, especially if you have quite a handful of instances to check.

I found that I wanted to know if my instances were up to date, but, eventually, I didn’t have enough time on my hands to be constantly checking for the available updates. I previously created and tried a solution that consumed and parsed a website to gather the information about the SQL Server build numbers. However, I decided to put that solution to rest because I realized that I don’t want to depend on the availability of an external site to keep things working fine for me.

After failing at many attempts to find a solution to automate this effort, I decided to build a public service that can surely help any SQL Server DBA fulfil this important duty.

Choosing the technology to build the public service

After several hours of thinking, I chose Azure (a solid decision, by the way), combining two of its “serverless” offerings to help reduce the overall costs. This article is in no way a deep dive into the technologies picked, so with that out of the way, let me explain why I picked Azure Serverless Functions and Azure SQL Database Serverless.

One of my first options was to spin a Virtual Machine, install a web server, a database, point a custom domain to the public IP assigned to the Virtual machine, and develop the service. However, by going this route, even if there’s no activity going on in the server, you still have to pay a minimum amount for the storage and virtual network assigned to your Virtual Machine.

With the serverless options, you can get a cost-effective and very convenient solution by paying only when your resources are actually used.

Azure SQL Database Serverless

Nowadays, there’s an offering from Microsoft, for your Database-as-a-Service solution, called serverless. A convenient feature from this option is that, if your database hasn’t been used for a continuous amount of time (1hr is the minimum you can pick, up to 7 days), then it will auto-pause itself and, you guessed it, you will only be charged for the storage assigned to your database. Under normal conditions, Microsoft charges for the storage and compute resources used by your Azure SQL Database.

There is one important detail that should be kept in mind, and it is the fact that if your database is in a paused state, and a request tries to hit the database, then it will require some time (usually it’s several seconds) for it to “wake up” and serve the request. Therefore, there might be times where it seems that your service is slow, but it is very likely that it is just the database “waking up”. You can find more information here.

Azure Serverless Functions

Azure Functions are an excellent option for quickly developing microservices without worrying about the underlying infrastructure that powers them; hence the term serverless. It doesn’t mean that no server runs behind the scenes. There are different service plans and configurations for your functions, but the convenient part for me is that there is a free monthly grant, and so far, I haven’t spent a single dime on Azure Functions.

You can find more information here.

Details and usage of the public service

Before detailing the structure and usage of the service, I would like to express one important fact, and it is that, as of the time of this writing, the usage of this public service is entirely free for the end-user. I am personally financing the resources described in Azure (even if it’s a tiny bit currently) and will continue to do so for the foreseeable future unless something prevents me from doing so.

To consume the service, you have to issue an HTTP request, either through a web browser or programmatically through a script, in order to get the json response with the data. As of this writing, there is no restriction upon who can consume this service; however, this can eventually change if any maliciousness is detected, such as trying to bring the service down.

NOTE: You have to be 100% sure that the machine from where you trigger the request has internet access. It might be an obvious thing, but I have seen cases where the service seems to be failing, and it is just that extremely simple detail.

Here is the structure of the URL:

http://www.sqlserverbuilds.info/api/builds/{version:regex(2019|2017|2016|2014|2012|2008R2|2008|all)}/{kind:regex(first|latest|all)}

As you can see, there are two sections of the URL within curly brackets {}. The first will tell the service about the information that the user is actually targeting:

{ version:regex(2019 | 2017 | 2016 | 2014 | 2012 | 2008R2 | 2008 | all) }

In here you specify, as a parameter, the particular SQL Server version for which you wish to know the released/available build numbers. If you specify all, then all the build numbers I have collected in the database are going to be returned. I have only populated the database with build numbers starting from SQL Server 2008. I know that there are many systems out there still running SQL Server 2005 and below, but I just thought that SQL Server 2008 would be a good starting point; perhaps in a future revision/release of this project, I might add even older versions.

The next part looks like this:

{ kind:regex(first | latest | all) }

In here you specify, as a parameter, how granular you want the information to be returned by the service.

First: tells the service to return only the very first build number for the specified SQL Server version.

Latest: tells the service to return only the latest build number for the specified SQL Server version.

All: tells the service to return all the build numbers found for the specified SQL Server version.

Output Examples:

Here are some examples of the call to the service and the results. Note, as I stated earlier, if you experience either a blank json object as a response or general slowness overall, it means that the database was in a paused state and it is “waking up”.

Retrieving the first build number for SQL Server 2019.

URL: http://www.sqlserverbuilds.info/api/builds/2019/first

Result: [{ "sp" : "RTM", "build_number" : "15.0.2000.5", "release_date" : "2019-11-04" }]

Retrieving the latest build number for SQL Server 2019.

URL: http://www.sqlserverbuilds.info/api/builds/2019/latest

Result: [{ "sp" : "RTM", "cu" : "CU4", "build_number" : "15.0.4033.1", "release_date" : "2020-03-31" }]

Retrieving all the build numbers for SQL Server 2019.

URL: http://www.sqlserverbuilds.info/api/builds/2019/all

Result: [
  { "sp" : "RTM", "cu" : "CU4", "build_number" : "15.0.4033.1", "release_date" : "2020-03-31" },
  { "sp" : "RTM", "cu" : "CU3", "build_number" : "15.0.4023.6", "release_date" : "2020-03-12" },
  { "sp" : "RTM", "cu" : "CU2", "build_number" : "15.0.4013.40", "release_date" : "2020-02-13" },
  { "sp" : "RTM", "cu" : "CU1", "build_number" : "15.0.4003.23", "release_date" : "2020-01-07" },
  { "sp" : "RTM", "extra" : "GDR", "build_number" : "15.0.2070.41", "release_date" : "2019-11-04" },
  { "sp" : "RTM", "build_number" : "15.0.2000.5", "release_date" : "2019-11-04" }
]

Retrieving the first build number of all the SQL Server versions stored in the database.

URL: http://www.sqlserverbuilds.info/api/builds/all/first

Result: [
  { "sp" : "RTM", "build_number" : "15.0.2000.5", "release_date" : "2019-11-04" },
  { "sp" : "RTM", "build_number" : "14.0.1000.169", "release_date" : "2017-10-02" },
  { "sp" : "RTM", "build_number" : "13.0.1601.5", "release_date" : "2016-06-01" },
  { "sp" : "RTM", "build_number" : "12.0.2000.8", "release_date" : "2014-04-01" },
  { "sp" : "RTM", "build_number" : "11.0.2100.60", "release_date" : "2012-03-06" },
  { "sp" : "RTM", "build_number" : "10.50.1600.1", "release_date" : "2010-04-21" },
  { "sp" : "RTM", "build_number" : "10.0.1600.22", "release_date" : "2008-08-07" }
]

Retrieving the latest build number of all the SQL Server versions stored in the database.

URL: http://www.sqlserverbuilds.info/api/builds/all/latest

Result: [
  { "sp" : "RTM", "cu" : "CU4", "build_number" : "15.0.4033.1", "release_date" : "2020-03-31" },
  { "sp" : "RTM", "cu" : "CU20", "build_number" : "14.0.3294.2", "release_date" : "2020-04-07" },
  { "sp" : "SP2", "cu" : "CU12", "build_number" : "13.0.5698.0", "release_date" : "2020-02-25" },
  { "sp" : "SP3", "cu" : "CU4", "extra" : "CVE", "build_number" : "12.0.6372.1", "release_date" : "2020-02-11" },
  { "sp" : "SP4", "extra" : "CVE", "build_number" : "11.0.7493.4", "release_date" : "2020-02-11" },
  { "sp" : "SP3", "extra" : "GDR", "build_number" : "10.50.6560.0", "release_date" : "2018-01-06" },
  { "sp" : "SP4", "extra" : "GDR", "build_number" : "10.0.6556.0", "release_date" : "2018-01-06" }
]

Structure of the JSON response

Once you get back the JSON response, you’ll need to interpret the information:

[
  {
    "sp" : "RTM",
    "cu" : "CU4",
    "build_number" : "15.0.4033.1",
    "release_date" : "2020-03-31"
  }
]

sp: The Service Pack level of the build number

  • From SQL Server 2017 and up, this will always be “RTM”, as Microsoft shifted to Cumulative Update-only releases.

cu: The Cumulative Update level of the build number.

  • When this field is absent from a particular response, it means that the build number is at its base RTM/SP level, without any Cumulative Update applied.

build_number: The actual build number of the specific release.

release_date: The date when Microsoft released the specific build number to the public.

  • Sometimes, there are rare cases where Microsoft pulls a particular build number from public availability (due to bugs, errors reported). When I find cases like these, I usually pull them from the database as well.

extra: When this field appears in a particular response object, it means that the build number is a special case release, either a General Distribution Release, a Hotfix, or an On-Demand update.

Bonus script to interact with the public service

Since the spirit of this public service is to allow fellow DBAs to programmatically consume the service, let me leave you a PowerShell script that you can use as a “stepping stone” for your own particular use case.

Something that has been very helpful to me (and it might be to you as well) is using this service to store the build number information in a central repository that I keep up to date and use to determine whether my instances are up to date. Of course, you would have to craft that solution and apply some sort of automation to it.

Code

$response = Invoke-WebRequest -URI http://www.sqlserverbuilds.info/api/builds/2019/first -UseBasicParsing
$json = ConvertFrom-Json $([String]::new($response.Content))

# This means that an option that targets multiple build numbers was sent
if ($json.length -gt 1) {
    foreach ($item in $json) {
        $item
    }
}
else {
    $json
}

Output Examples

Fetching one build number for a particular SQL Server version.


Fetching all the build numbers from a particular SQL Server version.


Fetching all the build numbers stored in the database.

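For DBAs who prefer Python for this kind of automation, here is a rough sketch of the central-repository idea mentioned above: pull the latest build number per version from the service and compare it against your own inventory. The instance names and build numbers below are invented for illustration, and a real check would parse the build numbers into numeric parts rather than comparing strings.

# Hypothetical sketch: compare a made-up instance inventory against the latest builds
# reported by the public service described in this article.
import json
import urllib.request

my_instances = {
    "PRODSQL01": ("2019", "15.0.4003.23"),   # version, current build (illustrative values)
    "PRODSQL02": ("2017", "14.0.3294.2"),
}

for name, (version, current_build) in my_instances.items():
    url = f"http://www.sqlserverbuilds.info/api/builds/{version}/latest"
    with urllib.request.urlopen(url) as response:   # remember: the database may need to "wake up"
        latest = json.load(response)[0]["build_number"]
    status = "up to date" if current_build == latest else f"behind (latest is {latest})"
    print(f"{name}: {current_build} -> {status}")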

Conclusion

I really hope that this personal initiative can be valuable to any SQL Server DBA out there facing the same situation that I once faced. Keep in mind that there is a chance that you might find errors while attempting to consume the service, and it wouldn’t be that surprising as the version I’m presenting within this article is v1.0.

I personally will be updating the database with every new release that Microsoft makes public, and any ideas, comments, suggestions, complaints will always be welcome in favor of improving this service in any possible way, so feel free to drop a comment and I will try my best to address it.

SQL – Simple Talk

Hailo partners with Foxconn to build edge device for AI inference

May 13, 2020   Big Data

AI startup Hailo today announced that it’s teaming up with Foxconn and system-on-chip provider Socionext to launch BOXiedge, an edge computing processing solution for video analytics. If the companies’ claims bear out, BOXiedge could deliver “market-leading” energy efficiency for AI inference, benefiting applications like industrial internet of things, smart cities, and smart medical.

BOXiedge is the successor to a mini server Foxconn teamed up with Network Optix to launch in January, which confusingly shares the same name. Unlike the previous server, this new BOXiedge can perform image classification, detection, pose estimation, and other tasks on footage from up to 20 cameras simultaneously thanks to SocioNext’s SynQuacer SC2AA chip and Hailo’s Hailo-8 processor, which features an architecture that consumes less power than rival chips while incorporating memory, software control, and a heat-dissipating design.

Under the hood of the Hailo-8, resources including memory, control, and compute blocks are distributed throughout the whole of the chip, and Hailo’s software — which supports Google’s TensorFlow machine learning framework and ONNX (an open format built to represent machine learning models) — analyzes the requirements of each AI algorithm and allocates the appropriate modules.

Hailo-8 is capable of 26 tera-operations per second (TOPs), which works out to 2.8 TOPs per watt. In a recent benchmark test conducted by Hailo, the Hailo-8 outperformed hardware like Nvidia’s Xavier AGX on several AI semantic segmentation and object detection benchmarks, including ResNet-50. At an image resolution of 224 x 224 pixels, it processed 672 frames per second compared with the Xavier AGX’s 656 frames and sucked down only 1.67 watts (equating to 2.8 TOPs per watt) versus the Nvidia chip’s 32 watts (0.14 TOPs per watt).


The edge AI hardware market is anticipated to be worth $1.15 billion by 2023, and Hailo — which raised $60 million in March — is hoping to beat rivals to the punch. Startups AIStorm, Esperanto Technologies, Quadric, Graphcore, Xnor, and Flex Logix are developing chips customized for AI workloads. Mobileye, the Tel Aviv company Intel acquired for $15.3 billion in March 2017, offers a computer vision processing solution for autonomous vehicles in its EyeQ product line. Baidu in July unveiled Kunlun, a chip for edge computing on devices and in the cloud via datacenters. And Chinese retail giant Alibaba launched an AI inference chip for autonomous driving, smart cities, and logistics verticals in the second half of 2019.

Foxconn is one of Hailo’s first publicly disclosed customers after NEC and ABB Technology. Previously, the startup said it’s working to build Hailo-8 into products from OEMs and tier-1 automotive companies in fields such as advanced driver-assistance systems (ADAS) and industries like robotics, smart cities, and smart homes.

Big Data – VentureBeat