Category Archives: Data Mining

TIBCO Announces Support for Apache Kafka and MQTT via Eclipse Mosquitto


TIBCO now includes commercial support and services for Apache Kafka® and Eclipse Mosquitto™, as part of TIBCO® Messaging. Any businesses using these open source projects can now take advantage of enterprise-class, 24×7, follow-the-sun support for their messaging infrastructure.

The Evolution of TIBCO Messaging

TIBCO® Messaging continues to evolve and adapt to a growing need to share data between an ever-increasing number and variety of applications. Messaging initially emerged out of the need to increase the level of abstraction and decrease the dependencies shared between applications.

Today, as the messaging environment continues to mature, developers look to public and private cloud, to containers, to devices, and to an expanding number of use cases like log aggregation, machine-generated data, and IoT data collection. Each of these use cases brings unique, and often challenging, requirements.

TIBCO recognizes the need to not only develop new and innovative messaging solutions, but also to enable anytime, anywhere messaging. With the addition of open-source software (OSS) support for Apache Kafka, and for MQTT via the Eclipse Mosquitto project, TIBCO Messaging is advancing the idea that different types of messaging, no matter the flavor, must be done efficiently, quickly, and reliably.

Customers now have access to the most comprehensive messaging portfolio in one seamlessly integrated platform, with a simple subscription model. The TIBCO Messaging solution covers all scenarios, including fully distributed, high-performance, peer-to-peer messaging, certified JMS messaging, web and mobile messaging, streaming messaging via Apache Kafka, and IoT messaging via MQTT and Eclipse Mosquitto. All these capabilities are backed by TIBCO’s industry-leading messaging expertise, innovation, and enterprise-class, 24×7, “follow-the-sun” support.

TIBCO and Open Source

This announcement further underscores TIBCO’s continued efforts in the open-source community. TIBCO is already a publisher of multiple open-source solutions, such as TIBCO Jaspersoft® (for embedded analytics), Project Flogo® (for edge microservices), and Project Mashling (for event-driven APIs). This announcement demonstrates how TIBCO is supporting popular open-source projects, to help increase awareness of, support for, and usage of these projects within large enterprises across the globe. TIBCO’s messaging developers will contribute to the Apache Kafka and Eclipse Mosquitto projects over time, as they work with customers and community members.

Integrating Apache Kafka with the TIBCO Ecosystem

Apache Kafka has been growing in popularity. While often used as a method of log aggregation, Apache Kafka is making new inroads in other areas such as streaming, distributed systems, and commit logs. With the ever-present requirement to have integrated applications which share data, TIBCO is taking the next step to seamlessly integrate Kafka into the TIBCO ecosystem.

Prior to the availability of TIBCO® Messaging – Apache Kafka Distribution (in May 2018), if a developer wanted to make data published to Kafka available directly to applications not based on Kafka, they would build a custom bridge (such as a Flogo flow or TIBCO BusinessWorks process) that would take in Kafka messages and publish via another messaging service, for example, TIBCO Enterprise Message Service or TIBCO FTL®.

With the introduction of TIBCO Messaging – Apache Kafka Distribution, developers can now seamlessly bridge Apache Kafka into the TIBCO FTL platform. Through the TIBCO FTL platform, Kafka message streams can be extended into other messaging applications, such as web and mobile via TIBCO eFTL, IoT through an MQTT broker, or JMS applications with TIBCO Enterprise Message Service, to name a few.
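Schematically, the bridging described above is a simple consume-and-republish loop. Here is a minimal, transport-agnostic sketch in Python, with the Kafka consumer and the FTL publisher stubbed out as plain callables; in a real bridge these would be an actual Kafka client and the TIBCO FTL API, so treat the names here as stand-ins:

```python
def run_bridge(consume, publish, transform=None):
    """Generic one-way bridge: pull messages from one transport,
    optionally transform them, and republish them on another."""
    forwarded = 0
    for message in consume():        # e.g. iterating a Kafka consumer
        if transform is not None:
            message = transform(message)
        publish(message)             # e.g. an FTL/EMS publish call
        forwarded += 1
    return forwarded

# Stand-ins for the real transports, just to exercise the loop.
def fake_kafka_consumer():
    yield from [b"order-created", b"order-shipped"]

outbox = []  # pretend FTL destination
count = run_bridge(fake_kafka_consumer, outbox.append,
                   transform=lambda m: m.decode().upper())
# outbox is now ["ORDER-CREATED", "ORDER-SHIPPED"]
```

The point of the pattern is that the loop itself stays the same whichever two transports you plug in; only the consume and publish callables change.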

Integrating IoT with the TIBCO Messaging Ecosystem via MQTT

In the IoT space, there are several unique design considerations that must be taken into account. In addition to a small footprint, messaging solutions must handle intermittent network connectivity, handle billions of clients, and must limit network bandwidth utilization. The MQTT protocol was developed to address these unique types of requirements.
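For a flavor of what makes MQTT lightweight: subscriptions use topic filters with just two wildcards, '+' (which matches exactly one topic level) and '#' (which matches all remaining levels). The sketch below illustrates the matching rule in plain Python; it is illustrative only, and skips edge cases such as '$'-prefixed system topics:

```python
def topic_matches(topic_filter, topic):
    """MQTT-style topic matching: '+' matches exactly one level,
    '#' (only valid as the last level) matches all remaining levels,
    including the parent level itself."""
    f_levels = topic_filter.split("/")
    t_levels = topic.split("/")
    for i, level in enumerate(f_levels):
        if level == "#":
            return True
        if i >= len(t_levels) or (level != "+" and level != t_levels[i]):
            return False
    return len(f_levels) == len(t_levels)

# "sensor/+/temp" matches any one device level under "sensor";
# "sensor/#" matches the whole subtree.
assert topic_matches("sensor/+/temp", "sensor/kitchen/temp")
assert topic_matches("sensor/#", "sensor/kitchen/temp")
assert not topic_matches("sensor/+/temp", "sensor/kitchen/humidity")
```

This routing scheme is part of why the protocol scales to huge numbers of clients: a broker can fan out messages by comparing topic strings, with no per-client schema or heavyweight addressing.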

At the same time, on-premises, cloud, and container messaging services still have requirements like guaranteed delivery, dynamic formats, disaster recovery, low latency and high throughput. These requirements would potentially add unnecessary weight to protocols designed to fit the small footprint devices in the IoT space.

In order to bring these environments together and still respect the unique nature of each messaging landscape, TIBCO Messaging will include TIBCO® Messaging – Eclipse Mosquitto Distribution, and an upcoming (May 2018) bridge that can “speak” both languages. An enterprise will be able to connect the lightweight MQTT protocol used by connected devices to a world-class, robust messaging protocol: TIBCO FTL. Using this bridge, developers will be able to bring the world of connected devices into the entire TIBCO ecosystem, including Flogo, TIBCO BusinessWorks, TIBCO BusinessEvents®, and other TIBCO products.

Will TIBCO’s Apache Kafka and Eclipse Mosquitto distributions be different than the standard ones?

TIBCO will support the upstream Apache Kafka and Eclipse Mosquitto distributions. TIBCO will also provide an optimized distribution of Apache Kafka (in May 2018) with the removal of deprecated features.

What about message formatting or Avro?

Apache Avro is a serialization framework that is commonly used with Apache projects and by users of Apache Kafka. It can provide a convenient way to define schemas and format your message data. TIBCO will be providing a schema repository (in May 2018) which will allow client applications to create and manage schemas and message formats using Avro. The schema repository will seamlessly integrate into an existing Apache Kafka project, and allow users to make use of Apache Avro for message schemas.
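To give a flavor of Avro: a schema is a JSON document naming each field and its type. The sketch below defines an invented SensorReading schema and does a toy type-check of a record against it in plain Python; real serialization would use the official avro package or fastavro, which encode records into Avro’s compact binary form rather than merely validating them:

```python
import json

# A hypothetical Avro schema, written as JSON.
schema = json.loads("""
{
  "type": "record",
  "name": "SensorReading",
  "fields": [
    {"name": "device_id", "type": "string"},
    {"name": "temperature", "type": "double"},
    {"name": "ts", "type": "long"}
  ]
}
""")

# Toy mapping from Avro primitive types to Python types -- for
# illustration only; a real Avro library does far more than this.
AVRO_TYPES = {"string": str, "double": float, "long": int,
              "int": int, "boolean": bool}

def conforms(record, schema):
    """Check that a record has exactly the schema's fields, each of
    the declared type."""
    fields = {f["name"]: f["type"] for f in schema["fields"]}
    return (record.keys() == fields.keys() and
            all(isinstance(record[name], AVRO_TYPES[typ])
                for name, typ in fields.items()))

reading = {"device_id": "dev-42", "temperature": 21.5, "ts": 1525000000}
```

A schema repository centralizes documents like the one above, so that every producer and consumer on a Kafka topic agrees on the record layout.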

How will support be provided?

Apache Kafka and Eclipse Mosquitto will be supported by TIBCO’s 24×7 support, by a team with over 25 years of experience providing support for messaging and operations in 30 countries around the world. It also means that customers who currently get support for TIBCO Messaging also get support for the entire stack, including support for Apache Kafka and Eclipse Mosquitto. The messaging support staff are established and trusted experts in the field, dealing with issues ranging from the low-latency messaging solutions that power the world’s financial markets, to the high volumes of distributed messaging solutions that run the world’s largest retail operations, and everything in between.

Where can I learn more?

For further information on this release, please check out this webinar.

To learn more about TIBCO Messaging, click here.


The TIBCO Blog

TIBCO Boosts Cloud-Native Offerings with New Support for Cloud Foundry Platform


TIBCO BusinessWorks Container Edition now supports Cloud Foundry’s Container Runtime and Pivotal Cloud Foundry 2.0 and higher.

With BusinessWorks Container Edition now supporting Cloud Foundry’s Container Runtime and Pivotal Cloud Foundry 2.0 and higher, TIBCO is proud to announce its strengthened partnership and continued support for Cloud Foundry.

“TIBCO and the Cloud Foundry Foundation have partnered since the initial launch of BusinessWorks Container Edition, providing highly scalable and enterprise-ready applications necessary for digital transformation,” said Abby Kearns, executive director, Cloud Foundry Foundation. “We’re excited to see our collaboration grow as the market continues to evolve.”

Cloud Foundry Container Runtime gives large enterprises the opportunity to run containers at scale and in production.

Cloud Foundry realized that Kubernetes was winning the container management race, so they worked hard to create Cloud Foundry Container Runtime, a solution that incorporates Kubernetes, to help enterprises manage their containers more efficiently.

On the heels of this announcement, TIBCO has also extended support for using Project Flogo with Project Riff, an open-source project that packages functions as containers, connects them with event brokers, and allows functions to scale with events. This allows developers to take flows built in Project Flogo and deploy them as functions in Project Riff.

Both of these announcements allow customers to leverage the best combination of cloud tools available on the market today to reach their business goals in a more efficient, effective manner.

To learn more, please read the press release.



MAIF Improves Knowledge and Customer Service with Spotfire


Two years ago, MAIF, the fifth largest insurer in France, had isolated applications and information that prevented it from providing great service. MAIF solved these problems with the TIBCO integration platform, messaging middleware, and API management software—and won itself a TIBCO Trailblazer award in the process.

More recently, the company found it needed an analytics solution capable of producing visual representations to simplify data exploration, and once again turned to TIBCO.

“You can imagine that an organization of our size has long used business intelligence tools. The problem with those solutions is their technicality. For reporting and tables, these solutions are fine, but when the objective is visual analytics without imposing technical prerequisites, difficulties arise,” says Stéphane Renoux, project manager in MAIF’s information systems department.

Once again, the results achieved were outstanding, including enthusiastic daily users, a clear understanding of portfolios and agencies, better alignment of portfolios and locations, and better claims service. Read this latest MAIF story to learn the extent of its achievements and the key TIBCO Spotfire capabilities it’s using to make it happen.



4 Revenue Benefits of Embedded Analytics for Application Vendors

While embedded analytics offers clear advantages to end users, it can also deliver scalability and revenue to the companies providing analytics to their customers.

Often overlooked, these benefits can make a big impact on your bottom line, positively influencing customer decision-making, encouraging new business, and even taking advantage of previously untapped monetization opportunities.

Let’s examine 4 main advantages of embedding an analytics solution in your B2B application.


1. Increased Win Rate

Analytics has become a compulsory functionality in today’s B2B market. This means that adding an analytics solution to a product or service that lacks one can immediately increase customer satisfaction, market positioning, and adoption.

In addition, upgrading an application’s analytics solution also presents revenue opportunities. It’s an ideal way to keep current customers’ attention by offering new capabilities from an existing offering. It can also pique the interest of net-new customers, potentially securing more business overall.

2. Decreased Churn Rate

Customers will switch solutions when they aren’t getting the functionality they need. Because data analytics is associated with competitive advantage, many decisions to switch solutions today are driven by the need for more information and better analytics. By adding or upgrading your offering’s analytics and BI, existing customers benefit from new capabilities that keep them from seeking a different solution.

On top of this, showing your clients that you’re constantly working to improve your product makes an impression. You can ensure long-term loyalty from your current customer base by offering them not just new features, but functional ones that will make their lives easier.

3. Expanded Product Licensing

Embedding analytics doesn’t have to be limited to a single use case or product. Similar to the above point, adding or upgrading analytics functionality can grow your application, product or service’s potential user base. The more departments, teams, or business units that can utilize and realize value from your application, the bigger increase you’ll see in user or product licenses from new and existing customers.

4. Feature Monetization

Giving users the ability to customize your application with additional “pay to play” modules (outside of the main offering) can be an excellent way to maximize the flexibility and value of your application, product, or service. Offering an analytics module can be a lucrative addition to your customization portfolio due to the sharp market demand for analytics tools. The additional information supplied by the added analytics can be a line item for additional revenue.

Ready to See The Benefits of Embedding Analytics Into Your Offering?

The revenue opportunities analytics and BI present are real. But what does this mean for your company? How can you be sure that you’re choosing the analytics solution that can expand and grow as you do? Choosing the right analytics solution that dovetails seamlessly with your application, product or service is, of course, an important and strategic decision.




Blog – Sisense

Middleware Modernization with TIBCO Connected Intelligence


The ability to rapidly connect applications and deliver data-driven insights at scale and at the right time is the key to digital success. The increase in the number and variety of endpoints, expectations around the faster delivery of integration solutions, and the need to support rapid innovation is forcing customers to extend and modernize their integration platform.

Why modernize your middleware and why now? As the numbers indicate, cloud is already mainstream, and cloud adoption in all of its forms is going to pick up the pace in the next three years. A growing number of the applications we integrate with are going to be in the cloud, and the middleware platform has to support integrating these applications. Increased adoption of cloud also means a drastic reduction in the time it takes to implement large and complex enterprise applications, and middleware services development has to keep pace with these shrunken timelines. Additionally, modernized middleware will support newer business models and do all of this at internet scale.

Organizations need a number of capabilities to respond effectively to the drivers of middleware modernization. They need to be equipped for quick turnaround times, able to rapidly deliver integration changes in as little as four to five months. With new requirements for agility and differentiation, organizations need to be built for change and to embrace it without losing consistency. The key is to find a balance between consistency and agility.

Another key capability is being able to pull together multiple related integration capabilities to drive innovation. To do this, organizations need to compile data from multiple systems, add business rules, and then expose them as an API. With a diverse set of technologies and different teams managing the delivery, it becomes very difficult to get these things done in any reasonable timeframe. Thus, organizations need a platform that interconnects everything.

With use cases such as personalized offers, fraud detection, predicting machine failures, and package tracking, processing the increasing volume and variety of data and events at scale, as they arrive, is key to delivering insights in real time. As a result, organizations need a solution that allows them to experiment and to incrementally adopt and scale what works for them.

But how do they get there? Transformation is required in four areas:

  • Technology and architecture
  • Platform and supporting services
  • Organization structures and delivery models
  • Processes and tools

The TIBCO Connected Intelligence suite provides a hybrid, pervasive integration platform that supports all of the key technology capabilities required for a modern middleware platform. This, combined with a modernization methodology that addresses the requirements for change, is key to scalable, risk-free transformation.

Register for our joint webinar with Wipro to learn more about how you can implement modernized middleware for digital success.



On the Edge of More Accessible IoT Innovation


We’re all well-versed with the narrative that pits the cloud and edge computing against one another as competing entities.

Both have jostled for the top status as IT’s core disruptor and been positioned as a stark choice to make, depending on a business’s priorities and capabilities. Yet this ‘either/or’ conundrum is a myth worth dispelling; they are entirely different concepts. The edge — that physical space that draws computing and intelligence ever closer to the data source — becomes a delivery mechanism for the disconnected elements of the cloud. As such, they can work in synergy, rather than as replacements, making an effective hybrid that combines the edge’s agility with the sheer processing power of the central cloud. It’s why both environments are deployment options for the new breed of developers creating smarter, event-driven microservices for faster, more flexible app development.

While it was predicted that connectivity costs would reduce to such an extent that locating system intelligence in the central cloud would become the preferred option, these predictions have not come to fruition. Instead, we have seen the gradual migration of IoT-created data to the edge and with it, the natural progression of enhanced connectivity and functionality. Indeed, intelligence at the very edge of the network is not only more accessible but captured in real time, in its purest form and freshest state. This makes it most valuable for informing immediate and accurate operational decisions.

The benefits rumble on: with compute happening directly on the device, bottlenecks caused by multiple devices communicating back to a centralized core network are consigned to the past. Furthermore, security risks are minimized, with the time that data spends in transit, vulnerable to attack, significantly reduced. When analytics are added to the mix, things become more interesting, as subsets of data can be collated and analysis localized for enhanced decision making.

While the case for the edge has always been compelling, not everything has traditionally thrived there. Today’s machine learning algorithms, for example, and their requirement for huge volumes of data and computing power, have long relied on the cloud to do the heavy lifting. Yet as artificial intelligence becomes a more mainstream reality, informing daily routines from smart cars to digital personal assistants, things have changed. Attention has now turned to how it can better deliver on the network periphery and the space closer to the mobile phones, computers and other devices where the applications which commonly harness this technology run.

We have already seen the benefits play out in the smart home area. Here, deep learning capabilities at the edge of the network inform the nuances and intuitive responses of IoT-enabled digital tools that integrate and interact to provide insight as situations change. They can then feed back real-time contextual information to the homeowner or, if intruders appear, to a professional monitoring resource.

This was just the start; bringing machine learning capabilities to the edge, on device, with no connectivity requirements, and simplifying the long-standing IoT integration challenges, has wider ramifications for a host of industries and applications beyond the consumer space. Solutions that can respond to events in milliseconds represent the most cutting edge of innovation in this space, unlocking ever greater value in territories as diverse as industrial settings to the medical sector.

Here, real-time information at its most accessible will drive intelligent diagnostic capabilities on medical devices and harness machine learning to make all manner of predictions such as the patients most at risk from a hospital infection or most likely to be readmitted after discharge. At this stage, we are not privy to the full potential of AI’s role in this environment. However, a future where a medical facility can offer patients the option of receiving online medical advice from an artificial intelligence software programme is on the horizon, and promises improvements from speed and efficiency, to patient care and cost savings. Equally, strides are being made in the context of industrial settings, where data must flow between a myriad of sensors, devices, assets and machinery in the field, often in unstructured or challenging and remote conditions. Detecting anomalies at the edge device offers the kind of agility needed for predictive monitoring and mission-critical decisions with the potential to save millions, in terms of addressing equipment failures before they do damage.
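As a toy illustration of the kind of lightweight check that can run on a constrained edge device, here is a rolling z-score anomaly detector in plain Python; the window size and threshold are arbitrary choices for the sketch, not values from any particular product or deployment:

```python
from collections import deque
from statistics import mean, stdev

class EdgeAnomalyDetector:
    """Flag readings that deviate sharply from a rolling baseline --
    small enough, in principle, to run on a constrained edge device."""
    def __init__(self, window=20, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def check(self, value):
        is_anomaly = False
        if len(self.history) >= 5:              # need a minimal baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                is_anomaly = True
        if not is_anomaly:
            self.history.append(value)          # keep the baseline clean
        return is_anomaly

detector = EdgeAnomalyDetector()
readings = [20.0, 20.1, 19.9, 20.2, 20.0, 20.1, 95.0, 20.0]
flags = [detector.check(r) for r in readings]
# flags marks only the 95.0 spike as anomalous
```

Because the detector keeps only a small fixed-size window, it needs neither connectivity nor historical storage, which is exactly the constraint edge deployments impose.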

Crucially, open source projects that simplify the development and deployment of microservices and IoT applications become the bedrock of this innovation. By enabling autonomous device action for a smarter edge and an era of more accessible IoT development, the potential is limitless.



TIBCO Connected Intelligence for Telco Recertified by TM Forum


The leading telecom industry consortium TM Forum has recertified the TIBCO Fulfillment Orchestration Suite solution for its conformance with Frameworx 17.0, the latest version of their entity data model (Information Framework or SID), and process model (Business Process Framework or eTOM) for the telecom industry.

Frameworx contains the continually updated best practices that vendors and operating companies seek to implement in their products and solution implementations. TIBCO Fulfillment Orchestration Suite is now listed as conformant on the Frameworx Certified Products, Solutions & Implementations page.

The TIBCO Fulfillment Orchestration Suite is a comprehensive set of products for accelerating the concept-to-cash cycle for multi-play communications service providers (CSPs) and media and entertainment distribution companies. It is the only solution that allows the flexible definition of fulfillment process components along with offer creation in your master catalog, leading to the efficient and accurate fulfillment of orders. It enables CSPs and media companies to define new products and service offerings along with associated fulfillment rules and processes, and to automate delivery from order capture to network service activation.

Specifically for the telecom and media market, TIBCO’s model-driven process orchestration allows telecom carriers and media distributors to digitally transform both the Business Support System (BSS) and the Operations Support System (OSS).

TIBCO Fulfillment Catalog enables carriers to model customer-facing product bundles and offers, and their associated technical services and fulfillment process components. In runtime, offers and bundles are priced dynamically by an in-memory offer and price engine. Once orders are validated and placed, TIBCO Fulfillment Order Management dynamically generates fulfillment plans (instead of using predefined workflows) and executes them as process component microservices, e.g., using TIBCO BusinessWorks. Concurrently, TIBCO Spotfire generates quick, advanced data visualizations to help business and operations people see the progress of customer orders from capture to fulfillment.
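To illustrate the general idea of generating a fulfillment plan dynamically rather than hard-coding a workflow: given the process components an offer requires and the dependencies between them, an execution order can be derived at runtime. A toy Python sketch follows; the component names are invented for illustration, and this is not a description of how TIBCO Fulfillment Order Management is actually implemented:

```python
# Derive an execution order for fulfillment process components from
# their dependencies, instead of using a predefined workflow.
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical components: each maps to the set of components that
# must complete before it can run.
dependencies = {
    "activate_service":       {"provision_network", "create_billing_account"},
    "provision_network":      {"validate_order"},
    "create_billing_account": {"validate_order"},
    "validate_order":         set(),
}

plan = list(TopologicalSorter(dependencies).static_order())
# plan starts with "validate_order" and ends with "activate_service"
```

If a new offer adds or removes components, only the dependency map changes; the plan is re-derived rather than re-authored, which is the appeal of dynamic plan generation.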

With the TIBCO Connected Intelligence platform, customers are able to interconnect data and devices and augment their intelligence through analytical insights. The full stack of TIBCO Connected Intelligence products for the telecom and media market helps companies accelerate the process of bringing concepts to market and reduce the time required for converting orders to cash.

TIBCO’s telecom and media customers include regional and global leaders in Europe, Africa, and Asia, digital transformation leaders such as T-Mobile, and several cable operators in the United States.

Read the full certification report here.



Why People Power is Leading the Charge in Digital Security


Digital transformation remains reliant on empowered users to come to fruition.

History shows us how implementing technology alone inevitably falls short; meaningful change demands a broader cultural shift. A shift that sees every strand of an organization buying into the innovation and taking responsibility, rather than being just passive recipients of a digital framework imposed upon them.

We see how this plays out to transformational effect when it comes to the use of analytics. Specifically, how more accessible methods of data analysis — notably via easy-to-read visual representations — enable more people to answer key business questions and make decisions to solve their own issues as they see fit. Imbuing a wider sense of ownership drives data democratization, and in turn, greater efficiency and productivity benefits for added value to the bottom line.

Not surprisingly, the business world is waking up to the merits of applying a similar ethos and approach to digital security. In a fast-paced, continually evolving digital environment, where the threat level rises exponentially with the rise of ever more sophisticated solutions, a more people-centric approach to security is becoming a logical, if overdue, progression.

Businesses now operate against a varied backdrop of big data, cloud, mobile and DevOps, and within a wider ecosystem of external partners and collaborations, which bring additional challenges from a security standpoint. As such, they need to rethink their approach to this fundamental issue. A sole focus on prevention infrastructure, by simply fortifying the boundaries with the traditional reliance on firewalls, can no longer handle these complexities and protect against internal threats once the attacker is inside the network.

In the digital era, we need a shift in mindset to something more sophisticated — something driven by greater trust in the user, which sees them empowered to take responsibility for security throughout the software development cycle. In tandem with detection technologies, which focus on monitoring, pattern matching, and behavioral analysis, the result is the same kind of real-time responsiveness we see thrive in so many strands of the digital operation now used for enhanced stability and resilience.

It’s an approach encapsulated by Gartner’s Continuous Adaptive Risk and Trust Assessment (CARTA). This provides a blueprint for how the agility of continual monitoring is now the bedrock of risk management, better aligned with the speed of digital business and a must for staying competitive.

If we apply this to the microservices arena — rightly heralded as the architecture powering the next generation of digital environments — we can see how their inherent flexibility lends itself perfectly to this approach. The modular nature is well-placed to employ service-specific security and monitor configurations. Introduce visual analytics, and we have a potent weapon for identifying anomalies in behavioral patterns that suggest suspicious activity, such as rogue IDs.

In a similar vein, this traction marks the evolution of DevOps. As a process already renowned for giving teams more responsibility for a project, we are seeing it morph into a new portmanteau, DevSecOps. This reflects how security is now afforded equal status with creation and deployment as an intrinsic part of the development process. By integrating security measures, such as tooling and automation, earlier in this cycle, the upshot is enhanced encryption and a framework that enables users to set and receive alerts to track and manage any API security threat. Furthermore, developers are in a better position to identify and address any shortfalls in their code and resolve issues themselves sooner, with less intervention from security teams, providing a more efficient, productive, and cost-effective way of working.

Perhaps the final element in this new wave of digital security is to incorporate deception technologies, such as adaptive honeypots. These are fake IT assets that can be created and deployed to lure in a would-be hacker and thwart their efforts before the damage is done. Having recently undergone a machine-learning-infused makeover, the traditional static iterations that relied on security experts for their configuration are making way for a new breed, one that can adapt its form after deployment and is better equipped to block suspicious activity.

More adaptation and agility, combined with people-centric ownership, undoubtedly hits the sweet spot for digital security.



Healthcare Dashboards: Examples of Visualizing Key Metrics & KPIs

The world of healthcare analytics is vast and can encompass a wide variety of organizations and use cases: from hospitals to medical equipment manufacturers, emergency rooms to intensive care units. And while some of the dashboard metrics tracked by healthcare organizations can be fairly similar to the ones monitored in other industries – such as finance or marketing – the use of business intelligence in hospitals presents a unique set of potential insights that can help physicians save lives by providing more effective and resourceful care to patients.

This article will examine a number of ways in which visualizing healthcare data can help physicians and management gain a better understanding of the goings-on within hospitals, and suggest ways to visualize commonly tracked metrics. But first, let’s understand where this data is coming from.

Common Data Sources in Healthcare

  • Electronic Medical Records (EMR) – these are essentially a digital version of the patient’s paper chart, used by clinicians to monitor the patient’s condition, treatments he or she is due for, etc. These are usually kept within the bounds of the facility in which the patient is being treated.
  • Electronic Health Records (EHR) – a broader set of digital records pertaining to the patient’s overall health, including information regarding previous treatment administered by other healthcare providers, specialists, laboratory tests, and more. These would typically move with the patient and be shared by various providers.
  • Specific departmental data – gathered by specific divisions or units within the healthcare organization.
  • Administrative data – collected in Healthcare Management Systems (HMS), this data covers the hospital’s overall operations. It would typically be used by a hospital’s senior managerial staff and may include information on matters such as resource utilization and human resources.
  • Financial data – often stored in proprietary financial management systems for larger organizations.

As you can see, healthcare providers often find themselves working with many disparate data sources. However, there are often unique benefits in connecting the data stored in these various sources to find correlations between them. Consolidating the data can be done in an enterprise data warehouse – a project best undertaken by well-staffed IT departments.
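To make the consolidation step concrete, here is a minimal sketch of joining extracts from two separate systems on a shared patient identifier. It uses pandas, and every table and column name here is invented for illustration:

```python
import pandas as pd

# Hypothetical extracts from two separate systems; all names are invented.
admissions = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "department": ["ER", "ICU", "ER"],
})
costs = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "cost": [1200.0, 5400.0, 800.0],
})

# Join on the shared patient identifier to build one consolidated view.
consolidated = admissions.merge(costs, on="patient_id", how="left")

# Example aggregate: total admission cost per department.
per_dept = consolidated.groupby("department")["cost"].sum()
print(per_dept)
```

A real warehouse pipeline would of course handle many more tables, keys, and data-quality issues, but the core operation is the same join-then-aggregate pattern.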

Examples of Data Visualization in Healthcare

Once you’ve gathered all the required data and completed the prerequisite data modeling steps, you can start monitoring key hospital metrics and thinking of insightful ways to visualize them in a healthcare dashboard. Here are a few healthcare analytics examples, with the disclaimer that these are by no means the only things a hospital would generally be looking at, nor necessarily the most crucial ones.

For the purposes of this article, we’ve used sample data.

Cost of Admission by Department

[Figure: Cost of admissions by department, bar chart]

This is a very simple visualization, but nevertheless one that can help hospitals understand how their financial resources are being utilized. By using a bar chart we immediately provide additional information that might have been more difficult to notice in tabular format – such as shifts in the relative costs between departments, as well as peaks that could indicate an issue that needs to be addressed, or at least further investigated.

A different way of visualizing the same data would be a line chart:

[Figure: Cost of admissions by department, line chart]

This visualization gives us a clearer idea of trends and outliers, and some people might find it more intuitive to examine the data for a specific department in this format – the significant information becomes apparent immediately. However, this is largely dependent on the viewer’s emphasis when examining the data.

Another common way to look at the same data would be via the following visualization, which gives the exact figures and a very clear idea of each department’s costs on an annual basis:

[Figure: Cost of admissions by department, phased bar chart]

As we’ve mentioned before, an effective dashboard reveals detail on demand. This means that after providing a high-level KPI overview, you might want to give the dashboard viewer the ability to drill into the data – in this case, the admission costs of the various units within the operating rooms. We chose a line chart as it gives us an immediate indication of highs and lows in admission costs:

[Figure: Line chart breakdown of admission costs by operating-room unit]

ER Admissions and Length of Stay

This visualization gives us a single-glance view of data from several different sources. In our sample dataset, we had to join data from the admissions, divisions, and ER tables. Combining these datasets gives us a clearer idea of hospital resource utilization by examining the number of patients admitted to the emergency room and the average time these patients spend at the hospital. This paves the way for further investigation into peaks, trends, and patterns.

[Figure: ER admissions and average stay in days]
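The join-and-aggregate behind a view like this can be sketched in a few lines. The table and column names below are hypothetical and the data is made up, but the shape of the computation (join admissions to divisions, derive length of stay, aggregate per month) matches what we described:

```python
import pandas as pd

# Hypothetical extracts; table and column names are invented for illustration.
er = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "division_id": [10, 10, 20, 20],
    "admitted":   pd.to_datetime(["2018-01-01", "2018-01-02", "2018-02-01", "2018-02-03"]),
    "discharged": pd.to_datetime(["2018-01-04", "2018-01-03", "2018-02-06", "2018-02-04"]),
})
divisions = pd.DataFrame({"division_id": [10, 20], "division": ["ER North", "ER South"]})

# Join ER admissions to the divisions table, then derive length of stay.
joined = er.merge(divisions, on="division_id")
joined["stay_days"] = (joined["discharged"] - joined["admitted"]).dt.days

# Per-month view: number of ER admissions and average stay in days.
monthly = joined.groupby(joined["admitted"].dt.to_period("M")).agg(
    admissions=("patient_id", "count"),
    avg_stay_days=("stay_days", "mean"),
)
print(monthly)
```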

Leading Diagnoses by Number of Patients, Cost and Stay

Here we’ve kept the data in tabular form. However, by combining financial and administrative data with departmental records, we gain the ability to quickly answer specific questions that shed further light on the various treatments being administered and how these affect hospital finances and room availability. Applying filters enables us to examine specific dimensions such as region, time, or facility.

Hospital Donations

If your organization bases its budget around donations, as so many do, it’s important to track trends in order to understand how to plan for the year ahead. A donations dashboard can help you find ways to increase donor engagement and ensure financial stability. If donated amounts differ from what you expected, or change dramatically, you can analyze donor retention levels and find ways to engage donors more.

[Figure: Hospital donations dashboard]


Blog – Sisense

Enterprise Blockchain: Consensus Algorithms


Achieving consensus can be difficult.  Whether it is planning a family vacation, or solving the Star Wars vs. Star Trek debate, reaching agreement across a number of (potentially) distributed individuals can be tough.  The same is also true in blockchain, as the lack of central authority and involvement of anonymous participants (especially with public blockchains like Bitcoin) introduces various complexities when trying to determine if transactions are valid, and to get the network to an agreed-upon state.

In the blockchain world, the techniques utilized to achieve agreement and make the network harder to attack are typically referred to as consensus algorithms.  Various algorithms exist, and debates continue as to which algorithm is best, even when one also considers permissioned or private blockchains (where participants are typically known, and the risk of malicious behavior is usually smaller).  Through this blog, we will explore a few of the common algorithms and approaches, without focusing on a single blockchain technology stack (although certain algorithms may only be implemented by one or two technologies) or deployment topology.

No conversation around consensus protocols would be complete, of course, without first mentioning BFT (Byzantine Fault Tolerance).  Many articles exist which describe the basic premise behind this concept, which is typically framed in terms of the Byzantine Generals problem.  Practical Byzantine Fault Tolerance (PBFT / Hyperledger Fabric), Federated Byzantine Agreement (FBA / Stellar), and Delegated Byzantine Fault Tolerance (dBFT / Neo) are all protocols that have been introduced in an attempt to achieve consensus in this distributed, non-centralized world.  Research in this area has subsequently led to various other related approaches, with the overall goal being to achieve consensus in an environment where machines may fail at any time or behave maliciously.

Following BFT, the first algorithm to mention is referred to as Proof of Work (PoW).  Probably made most popular by the Bitcoin network, proof of work essentially involves a mathematical “guessing game” where miners utilize specialized hardware to derive a “nonce” via trial-and-error.  Once this value is determined, the winning miner adds its block to the network, and the process starts again across all miners with a new block and transactions.  Changing existing blocks becomes very difficult as the ledger grows longer, as it would be computationally very expensive to re-compute the subsequent blocks at a rate faster than the rest of the network.  The complexity of this “guessing game” is also adjusted by the network such that new blocks are mined approximately once every 10 minutes (in the case of Bitcoin).

This process, however, is argued to be computationally expensive and “wasteful”, especially given the amount of electricity required to operate the mining pools and specialized hardware (ASICs).  This is one of the reasons why mining pools have (until recently) been concentrated in China, where low-cost electricity was readily available (regulation changes, however, may push these mining pools out of China).  Many studies have attempted to estimate the global power consumed by Bitcoin mining (e.g. popular reports by Digiconomist, or a recent report by Morgan Stanley, which estimates that Bitcoin’s power demand will equal the energy consumed in a year by the entire country of Argentina), but the exact number is very hard to validate.  Regardless, it is clear that this approach is not always the best solution, and thus a large amount of research is being conducted into how the security of a blockchain network (especially in a public context) can be maintained with a more efficient protocol.
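As a toy illustration of the guessing game, the sketch below brute-forces a nonce whose hash starts with a given number of zero hex digits. This is a simplification: real Bitcoin mining uses double SHA-256 against a numeric difficulty target, and the block data here is a made-up placeholder:

```python
import hashlib

def mine(block_data: bytes, difficulty: int) -> int:
    """Brute-force a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each extra zero digit multiplies the expected number of guesses by 16.
nonce = mine(b"prev_hash|tx1,tx2", difficulty=4)
print(nonce)
```

Verifying a solution is cheap (one hash), while finding it is expensive; that asymmetry is what makes rewriting history computationally prohibitive.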

A second set of approaches is typically categorized as Proof of Stake (PoS).  There are many variations that fall within this category, and the movement of the Ethereum network towards one of these variations (often referred to as “Casper”) has also made this approach fairly well known.  With PoS, blocks are not created by competing miners; the originators or leaders of block creation are selected based on the size of the stake they hold in the network.  This reduces energy consumption, as the computational demands associated with PoW are removed, but there is still a risk that computational power may be used to bias the leader election process.  Cardano, which introduced a PoS algorithm called Ouroboros, claims to have the first PoS-based protocol that is “provably secure”, and shows some promise.
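Stake-weighted leader election can be sketched as below. This toy uses a seeded pseudo-random choice as a stand-in for the network's shared randomness; real protocols such as Ouroboros or Casper derive that randomness verifiably and run in slots and epochs, and the names here are invented:

```python
import random

def select_leader(stakes: dict, seed: int) -> str:
    """Pick a block proposer with probability proportional to its stake."""
    rng = random.Random(seed)  # the seed stands in for the network's shared randomness
    validators = sorted(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

stakes = {"alice": 60, "bob": 30, "carol": 10}
leader = select_leader(stakes, seed=42)
print(leader)
```

Over many rounds, a validator holding 60% of the stake should lead roughly 60% of the time, which is exactly what gives large stakeholders an incentive to keep the network honest.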

Proof of Spacetime (PoSt) is in some ways similar to PoW in that mining power is proportional to computational resources.  However, in this case, the computational resource is not ASIC or GPU processing power, but active storage (see Filecoin as an example).  PoSt may be used to determine if certain data is being stored for a period of time, and this approach may be combined with Proof of Replication (PoRep) to determine if data has been replicated to unique storage.  PoSt and PoRep utilize expensive resources in this context (storage), but the resources are not “wasted” as in PoW.

Tangle, part of the IOTA IoT-focused “ledger of things”, is a “blockless distributed ledger” that utilizes a directed acyclic graph (called the “tangle”) for storing transactions.  Consensus in this model is quite different, as the goal of this approach is to enable IoT devices to communicate and “pay” one another (via micropayments) for capability or services.  Since these payments may be quite small, having onerous transaction fees does not make sense, and thus the goal of the protocol is to allow for the validation of transactions without having to pay transaction fees to miners.  To achieve consensus in Tangle, a participant must approve two other transactions before it can send a transaction.  This approval is done via a type of PoW algorithm (not as computationally expensive or time consuming as with Bitcoin), and approved transactions are forwarded over the network where they may be approved by other participants.  This process continues until the transaction is determined to be fully confirmed.  This process can (in theory) happen quickly as the number of transactions grows and can greatly reduce the costs associated with achieving transaction consensus.
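The tip-approval rule can be sketched as a toy DAG. This ignores the small PoW attached to each approval and IOTA's weighted random walk for tip selection; the class and method names are invented for illustration:

```python
import random

class Tangle:
    """Toy DAG ledger: each new transaction approves up to two earlier ones."""

    def __init__(self):
        self.approves = {"genesis": []}  # tx -> list of txs it directly approves
        self.tips = {"genesis"}          # txs not yet approved by anyone

    def add_round(self, tx_ids, rng):
        # Transactions arriving "concurrently" see the same tip snapshot,
        # which is how the graph branches and later re-merges.
        snapshot = sorted(self.tips)
        for tx in tx_ids:
            chosen = rng.sample(snapshot, k=min(2, len(snapshot)))
            self.approves[tx] = chosen
            self.tips -= set(chosen)
            self.tips.add(tx)

tangle = Tangle()
rng = random.Random(1)
tangle.add_round(["tx1", "tx2"], rng)  # both approve genesis, creating two tips
tangle.add_round(["tx3"], rng)         # approves both current tips, re-merging the graph
```

A transaction becomes "more confirmed" as more later transactions directly or indirectly approve it, so confirmation strengthens as network activity grows.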

At this point, we will stop while we are ahead, or this blog could turn out to be many pages!  As one can see, the area of consensus is a crowded world.  As research continues, expect to see more algorithms that attempt to solve the problem of consensus while achieving higher levels of performance and scalability.  Note that we are also seeing “pluggable” consensus capabilities, which give creators of (typically) permissioned or private blockchains the ability to pick the model that best suits their needs.  The list above is by no means exhaustive, but hopefully it gives you a bit more clarity into the world of blockchain consensus.


The TIBCO Blog