
Category Archives: Tableau

Fear! Excitement! Trends disrupting your career in the 2020s!

December 3, 2020   Tableau

Do you feel left behind? If you are witnessing today’s unprecedented speed of technological change, a sense of apprehension would not be surprising.

In a hyperspeed environment, individuals cannot gain expertise quickly enough. By the time you learn and master a topic, that expertise already seems obsolete. From a corporate perspective, a lack of technology talent pushes companies toward other options, one of which is automation to reduce reliance upon human workers.

Our current automation trend will eliminate a large chunk of today’s jobs. However, that same disruption causes many more opportunities to emerge for the right individuals. Do not be careless and allow your career to be destroyed in the 2020s. Be aware of technology trends, prepare, and pivot to a new place of success.

Here are five areas you should watch in the next few years.

CLOUD

For competitive and cost-saving reasons, companies are migrating on-premise applications and data onto cloud platforms such as Amazon Web Services, Microsoft Azure, and Google Cloud. Companies benefit by eliminating the significant overhead of private data centers, hardware, and support services. Plus, they gain market agility by being able to scale digital assets almost immediately.

Companies will eliminate the jobs associated with legacy, on-premise support. With talent at a premium, companies may retrain these individuals for other roles, but the cold reality is that many will instead let long-time legacy employees go and replace them with new talent.

Software vendors will reduce costs by moving to cloud-only solutions, eliminating the need to install and support on-premise applications. Field technical staff who performed these roles will no longer be needed; only a smaller core group for the centralized cloud support will remain.

AUTOMATION

Companies will continue automation, combining software with artificial intelligence and machine learning, to reduce costs, gain competitive advantages, and increase revenue. With smart automation, firms will replace many individuals who perform repeatable tasks in controlled environments. It’s a simple decision: automated work can be performed 24×7 without stoppage and at lower costs than humans.

Vendors will provide automated tools to accelerate the movement from on-premise applications to the cloud platforms. During the next few years, you will see a mad rush to push business applications onto cloud platforms.

NEW COMPETITORS

New ventures built on new technology will emerge and disrupt legacy businesses. One advantage is that they do not carry the baggage of legacy platforms, bureaucracy, and long-time employees. These nimble barbarians will attack the fortresses of established empires, speeding the decline of well-known companies.

DISPOSABLE TECHNOLOGY

Rapid technological improvement means that older technology gets thrown away sooner. Ongoing change, new competition, and a lack of talent push companies to speed the elimination of legacy applications and old ways of doing business.

When everything becomes a paid cloud service, companies need fewer technical support employees. However, individuals who can train others in emerging technology will be important. Because modern tools change quickly, these tech trainers must pivot and learn just as fast. Instead of working for one company, these individuals may provide global online services, generating both active and passive income. To meet demand, online courses and certifications will grow.

CELEBRITY TALENT

While automation will eliminate many legacy jobs, rapidly changing technologies and a lack of resources will provide a Wild West gold rush for savvy individuals in the 2020s. Some will become solopreneurs with a strong social media presence, forcing firms to find a new HR model to replace their legacy command-and-control methods designed to restrict employee behaviors.

Instead of dependency upon full-time employees, companies will leverage project-based, remote talent who can be shuffled in and out as needed. Firms will need to develop the culture and skills for working with free agents. A corporate initiative will begin to resemble the effort of producing a blockbuster movie using contracted talent during development.

As a result of a smaller talent base, corporate work conditions will change. Work-from-home will gain even more acceptance and individuals will not need to live within an hour commute of a downtown office building. Instead, technology will be securely available from cloud platforms and workers will spread out, with less clustering in mega-urban centers. Headquarters will become occasional convention sites for talent community-building, celebration, and edification events.

Depending on your worldview, the technology trends of the 2020s are either fear-inducing dangers or exciting opportunities. Make sure you wield this double-edged sword properly. Contact me at Doug@kencura.com.


Preparing for FOCUS-to-WebFOCUS Conversions

November 30, 2020   Tableau

 If you are considering converting your FOCUS 4GL environment to the new web-based version, here are some things you need to know.

Many people want to understand the difference between FOCUS and WebFOCUS and come to my blog looking for a comparison between the two products, so let me start there.

Both are software products from Information Builders and both share a common 4GL processor. In fact, the vendor in recent years has been able to consolidate these two products into a single code base, which is fairly portable and independent of any particular operating system.

The FOCUS product was used both interactively and in batch. Online users could communicate with menus and screens for providing information or go directly to a command processor for simple ad-hoc requests. Programs could also be run using JCL or another batch control mechanism, with parameters passed in or determined by the program itself.

There are three broad components of the FOCUS 4GL, the main piece being a non-procedural language for reporting, graphing, analysis, and maintaining data. There is also a procedural scripting language (Dialogue Manager) that provides some logical control of the embedded non-procedural code, symbolic variable substitutions, and multi-step complex processes. These are critical to enabling WebFOCUS to deliver complex, dynamically generated web applications.

A third important component is the metadata and adapter layer, which hides the complexity of the underlying data structures, allowing developers and end users to write 4GL programs with minimal knowledge of the data.

Major Features of the Procedural Scripting (Dialogue Manager):

  • Symbolic variable substitutions (calculations, prompting, file I/O, etc.)
  • System variables (date, time, userid, platform, environment settings, etc.)
  • Calculations of temporary variables
  • GOTO branch controls and procedural labels (non-conditional as well as IF-THEN-ELSE conditional branching)
  • Embedded operating system commands
  • External file I/O
  • Green-screen interaction with the user (not functional in WebFOCUS)
  • Executing procedures (EXEC command and server-side code inclusions)

Major Features of the Non-Procedural Scripting (FOCUS 4GL):

  • Reports and output files (TABLE facility)
  • Graphs (GRAPH facility)
  • Joining files (JOIN facility)
  • Matching files (MATCH facility)
  • Database maintenance (MODIFY facility; non-screen features supported in WebFOCUS, otherwise replaced by MAINTAIN)
  • Statistical analysis (ANALYZE facility; was rarely used and not ported to WebFOCUS; recently R Stat support was added)
  • Environment settings (SET phrases)
  • Calculation of temporary columns (DEFINE and COMPUTE phrases)

FOCUS-to-WebFOCUS Conversion Issues:
Despite the portable FOCUS 4GL that lies beneath the covers of WebFOCUS, there are still some considerable challenges to converting from legacy to web-based architectures. I have solved some of those problems for you by automating the process. Below are some conversion issues and their potential solutions.

1) Major architectural change (single technology stack to enterprise web stack)

Solution: architect a solution that minimizes change
Solution: for new WebFOCUS app path commands, automatically add to existing code

2) New end user environment

Solution: automatically convert existing 4GL programs for users; generate scripts for loading Managed Reporting Environment; provide user training

3) Persistent sessions not supported in web environment

Solution: analyze and determine how to replicate persistence (for example, loss of “global” variables)

4) Batch processing handled differently in web environment

Solution: replicate batch jobs using WebFOCUS ReportCaster scheduler/distribution product

5) Output report formats default to HTML, which does not respect the original layout

Solution: automatically add stylesheets and PDF support

6) Dumb terminal green-screens not supported in WebFOCUS

Solution: for simple menus, convert to HTML
Solution: for simple data maintenance, convert to HTML and MODIFY
Solution: for complex data maintenance, convert to MAINTAIN

7) WebFOCUS eliminated some legacy FOCUS features (text editor, end-user wizards, type to screen, ANALYZE statistical facility, etc.)

Solution: analyze and develop work-around

8) New Graph engine

Solution: automatically add support for new graph rendering (third-party Java product)

9) If moving to new platform, multiple problems, including access to legacy data, embedded OS commands, file names, allocations, user-written subroutines, userids, printer ids, integrated third-party tools (e.g., SAS, SyncSort, OS utilities), etc.

Solution: analyze and automatically convert as much as possible

10) Organization typically wants to take advantage of new features quickly

Solution: automatically add some support during conversions (e.g., spreadsheets, dynamic launch pages to consolidate existing FOCUS code) — in other words, get rid of the legacy product as quickly as possible by doing a straight replication, but try to give the business some new things in the process

Trying to manually convert FOCUS to WebFOCUS is just not a good approach. By utilizing a proven methodology and software toolkit for automating much of the manual effort, you will dramatically reduce the time, cost, skill-set requirements, and risk of doing the legacy replacement.
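
To give a feel for what that automation can look like, here is a minimal Python sketch (not the author's actual toolkit) that inventories a folder of legacy FOCUS procedures for a few of the conversion issues listed above; the .fex file extension, the source directory, and the patterns are assumptions for the example.

```python
import re
from pathlib import Path

# Hypothetical patterns tied to the conversion issues above: green-screen I/O,
# Dialogue Manager branching, embedded OS commands, and the retired ANALYZE facility.
PATTERNS = {
    "green-screen CRTFORM (issue 6)": re.compile(r"^\s*CRTFORM", re.IGNORECASE | re.MULTILINE),
    "Dialogue Manager -GOTO (review branching)": re.compile(r"^-GOTO\b", re.IGNORECASE | re.MULTILINE),
    "embedded OS command (issue 9)": re.compile(r"^-(MVS|TSO|DOS|UNIX)\b", re.IGNORECASE | re.MULTILINE),
    "ANALYZE facility not in WebFOCUS (issue 7)": re.compile(r"\bANALYZE\b", re.IGNORECASE),
}

def scan_focus_procedures(folder: str) -> None:
    """Print a simple inventory of potential conversion issues per procedure."""
    for fex in sorted(Path(folder).glob("*.fex")):  # assumes .fex source files
        text = fex.read_text(errors="ignore")
        hits = [label for label, pattern in PATTERNS.items() if pattern.search(text)]
        if hits:
            print(f"{fex.name}: {', '.join(hits)}")

if __name__ == "__main__":
    scan_focus_procedures("legacy_focus")  # hypothetical source directory
```

A real conversion toolkit would go further and rewrite the code, but even a crude inventory like this helps size the effort before committing to a schedule.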

Be sure to read some of my other blogs on this topic.  A good place to start is here.

If you have questions, feel free to contact me.

You may also be interested in these articles:
  • White Paper on Automating BI Modernizations
  • BI Modernization Frequently Asked Questions
  • Using Text Data Mining and Analytics for BI Modernizations
  • Using Word Clouds to Visually Profile Legacy BI Applications
  • DAPPER Methodology for BI Modernizations
  • Leave a Legacy: Why to Get Rid of Legacy Reporting Apps
  • Moving off the Mainframe with Micro Focus
  • Preparing for FOCUS-to-WebFOCUS Conversions
  • Converting the NOMAD 4GL to WebFOCUS
  • Convert FOCUS Batch JCL Jobs for WebFOCUS
  • Automatically Modernize QMF/SQL to WebFOCUS
(originally posted on 2009 Feb 03)


Merging Blogs

November 26, 2020   Tableau

I am in the process of merging two blogs together. 

For the time being, see this soon-to-be-eliminated blog: BI Software | Business Intelligence (bi-software-webfocus.blogspot.com).


Taking the Mystery out of Big Data

July 10, 2015   Tableau
Today’s companies have the potential to benefit from incredibly large amounts of data.

To shake off the mystery of this “Big Data,” it’s useful to know its history.


In the not-so-distant past, firms tracked their own internal transactions and master data (products, customers, employees, and so forth) but little else. Companies probably only had very large databases if their industry called for high-volume and high-speed applications such as telecommunications, shipping, or point of sale. Even then, those transactions were all formatted in a standard way and could be saved inside the relational databases IBM pioneered in the 1970s.

This was perfectly fine for corporate computing in the 1970s and 1980s. Then, in the middle of the 1990s, along came the world-wide web, browsers, and e-commerce. Before the end of that decade, a web search engine company named Google was facing the challenge of tracking all of the changes happening across web pages around the globe. A traditional computing option would have been to scale up: get a bigger platform, a more powerful database engine, and more disk space.

But spending money wasn’t a good option for a little operation like Google; it was well behind the established search engines like Lycos, WebCrawler, AltaVista, Infoseek, Yahoo, and others.

Google decided on a strategy of scaling out instead of up. Using easily obtained commodity computers, they spread out not only the data but also the application processing. Instead of buying a big supercomputer, they used thousands of run-of-the-mill boxes all working together. On top of this distributed data framework, they built a processing engine around the map-shuffle-reduce pattern, which became known as MapReduce.
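
To make the map-shuffle-reduce idea concrete, here is a tiny single-machine Python sketch of the pattern; in Google's or Hadoop's infrastructure the map and reduce steps run in parallel across many machines and the shuffle moves intermediate data between them.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document: str):
    """Map: emit a (word, 1) pair for every word in one document."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle_phase(pairs):
    """Shuffle: group all emitted values by key, as the framework does between nodes."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: combine the grouped values for each key into a final result."""
    return {key: sum(values) for key, values in grouped.items()}

documents = ["big data is big", "data about data"]
pairs = chain.from_iterable(map_phase(doc) for doc in documents)
print(reduce_phase(shuffle_phase(pairs)))  # {'big': 2, 'data': 3, 'is': 1, 'about': 1}
```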

Of course, a scale-out paradigm meant Google now had multiple places where a failure could happen when writing data or running a software process. One or more of those thousands of cheap computers could crash and mess up everything. To deal with this, Google added automated data replication and fail-over logic to handle bad situations under the covers and still make everything work as expected for the user.

In a paper published in 2003, Google explained its distributed data storage methods to the world. The following year, it disclosed details of its parallel-processing engine.

One reader of Google’s white papers was Doug Cutting, who was working on an Apache Software Foundation open-source software spider/crawler search engine called Nutch. Like Google, Doug had run into issues handling the complexity and size of large-scale search problems. Within a couple of years, Doug applied Google’s techniques to Nutch and had it scaling out dramatically.

Understanding its importance, Doug shared his success with others. In 2006, while working with Yahoo, Doug started an Apache project called “Hadoop,” named after his son’s stuffed toy elephant. By 2008, individuals familiar with this new Hadoop open-source product were forming companies to provide complementary products and services.

With our history lesson over, we are back to the present. Today, Hadoop is an entire “ecosystem” of offerings available not only from the Apache Software Foundation but from for-profit companies such as Cloudera, Hortonworks, MapR, and others. Volunteers and paid employees around the world work diligently and passionately on these open-source Big Data software offerings.

When you hear somebody say “Big Data,” he or she typically refers to the need to accumulate and analyze massive amounts of very diverse and unstructured data that cannot fit on a single computer. Big Data is usually accomplished using the following:

  • Scale-out techniques to distribute data and process in parallel
  • Lots of commodity hardware
  • Open-source software (in particular, Apache Hadoop)




Large companies with terabytes of transactions stored in an enterprise data warehouse on appliances such as Teradata or Netezza are not doing Big Data. Sure, they have very large databases, but that’s not “big” in today’s sense of the word.

Big Data comes from the world around the company; it’s generated rapidly from social media, server logs, machine interfaces, and so forth. Big Data doesn’t follow any particular set of rules, so you will be challenged when trying to slap a static layout on top of it and make it conform. That’s one big reason why traditional relational database management systems (RDBMSs) cannot handle Big Data.

The term “Hadoop” usually refers to several pieces of Big Data software:

  • The “Common” modules, handling features such as administration, management, and security
  • The distributed data engine, known as Hadoop Distributed File System (HDFS)
  • The parallel-processing engine (either the traditional MapReduce framework, which now runs on the YARN resource manager, or an emerging engine called Spark)
  • A distributed data warehouse feature on top of the HDFS (HBase for standard reporting needs or Cassandra for active, operational needs)
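
To see what code against one of these engines looks like, below is a minimal PySpark sketch that counts HTTP status codes in a web server log stored in HDFS; the file path and log layout are assumptions, and it presumes a working Spark installation.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("StatusCodeCount").getOrCreate()

# Assumed layout: the HTTP status code is the ninth whitespace-separated field,
# as in the common/combined log format; the HDFS path is hypothetical.
lines = spark.sparkContext.textFile("hdfs:///logs/access_log")
status_counts = (
    lines.map(lambda line: line.split())
         .filter(lambda fields: len(fields) > 8)
         .map(lambda fields: (fields[8], 1))
         .reduceByKey(lambda a, b: a + b)
)

for status, count in status_counts.collect():
    print(status, count)

spark.stop()
```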


In addition to the basic Hadoop software, however, there are lots of other pieces. For putting data into Hadoop, for example, you have several options:

  • Programmatically with languages (e.g., Java, Python, Scala, or R), you can use Application Programming Interfaces (APIs)  or a Command Line Interface (CLI)
  • Streaming data using the Apache Flume software
  • Batch file transfers using the Sqoop module
  • Messages using the Kafka product
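
As a rough sketch of the messaging option, the snippet below pushes JSON events onto a Kafka topic using the kafka-python client; the broker address, topic name, and event fields are assumptions.

```python
import json
from kafka import KafkaProducer  # pip install kafka-python

# Hypothetical broker and topic; adjust to your environment.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

event = {"device_id": "badge-reader-17", "action": "entry", "timestamp": "2015-07-10T08:15:00"}
producer.send("machine-events", value=event)  # a downstream consumer or HDFS sink picks this up
producer.flush()
producer.close()
```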


When pulling data out of Hadoop, you have other open-source options:

  • Programmatically with languages
  • HBase, Hive (with HiveQL), or Pig (with Pig Latin), all of which provide easier access than writing MapReduce code against the underlying distributed file system
  • Elasticsearch or Solr for searching
  • Mahout for automated machine learning
  • Drill, an always-active “daemon” process, which acts as a query engine for data exploration
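
For the Hive option, a minimal Python sketch using the PyHive client might look like the following; the host, database, table, and columns are assumptions, and HiveServer2 must be running.

```python
from pyhive import hive  # pip install pyhive

# Hypothetical HiveServer2 endpoint and table.
connection = hive.Connection(host="hadoop-edge-node", port=10000, database="weblogs")
cursor = connection.cursor()

# HiveQL looks like SQL; the engine translates it into distributed jobs over HDFS data.
cursor.execute(
    "SELECT status_code, COUNT(*) AS hits "
    "FROM access_log GROUP BY status_code ORDER BY hits DESC LIMIT 10"
)
for status_code, hits in cursor.fetchall():
    print(status_code, hits)

connection.close()
```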


But why would you want the complexity of this “Big Data?”

It was obvious for Google and Nutch, search engines trying to scour and collect bytes from the entire world-wide web.  It was their business to handle Big Data.

Most large firms sit on the other end of Google: they have a website that people browse and use, quite probably navigating to it from Google’s search results. One Big Data use case for most companies would therefore be large-scale analysis of their web server logs, in particular looking for suspicious behavior that suggests some type of hacking attempt. Big Data can protect your company from cybercrimes.
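
As a scaled-down, single-machine illustration of that idea, the Python sketch below counts failed requests per client IP in a web server access log and flags the noisiest addresses; the log file name, format, and threshold are assumptions, and a real deployment would run the same logic as a distributed job over far more data.

```python
from collections import Counter

FAILED_THRESHOLD = 100  # hypothetical cutoff for "suspicious"

def flag_suspicious_ips(log_path: str):
    """Count 4xx/5xx responses per client IP and return the heavy hitters."""
    failures = Counter()
    with open(log_path, errors="ignore") as log:
        for line in log:
            fields = line.split()
            # Assumed common log format: field 0 is the client IP, field 8 the status code.
            if len(fields) > 8 and fields[8].startswith(("4", "5")):
                failures[fields[0]] += 1
    return [(ip, count) for ip, count in failures.most_common() if count >= FAILED_THRESHOLD]

for ip, count in flag_suspicious_ips("access_log"):  # hypothetical file name
    print(f"{ip} produced {count} failed requests")
```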

If you offer products online, a common Big Data use case would be as a “recommendation engine.” A smart Big Data application can provide each customer with personalized suggestions on what to buy. By understanding the customer as an individual, Big Data can improve engagement, satisfaction, and retention.

Big Data can be a more cost-effective method of extracting, transforming, and loading data into an enterprise data warehouse. Apache open-source software might replace and modernize your expensive proprietary COTS ETL package and database engines. Big Data could reduce the cost and time of getting your BI results.  

It’s a jungle out there; there’s fraud happening. You may have some bad customers with phony returns, a bad manager trying to game the system for bonuses, or entire groups of hackers actively involved in scamming money from your company. Big Data can “score” financial activities and provide an estimate of how likely individual transactions are to be fraudulent.

Most companies have machine-generated data: time-and-attendance boxes, garage security gates, badge readers, manufacturing machines with logs, and so forth. These are examples of the emerging tsunami of “Internet of Things” data. Capturing and analyzing time-series events from IoT devices can uncover high-value insights of which we would otherwise be ignorant.

The real key to Big Data success is having specific business problems you need to solve and on which you would take immediate action.

One of my clients was great about focusing on problems and taking actions. They had pharmacies inside their retail stores and, each week, a simple generated report showed the top 10 reasons insurance companies rejected their pharmacy claims. Somebody was then responsible for making sure the processing problems behind the top reasons went away.
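
That kind of report is conceptually simple; a toy Python version over an exported claims file might look like this, where the file name, column names, and status values are assumptions for the example.

```python
import csv
from collections import Counter

def top_rejection_reasons(claims_file: str, limit: int = 10):
    """Tally the rejection reason for each denied claim and return the top offenders."""
    reasons = Counter()
    with open(claims_file, newline="") as handle:
        for row in csv.DictReader(handle):
            if row.get("status") == "REJECTED":  # assumed column and value
                reasons[row.get("rejection_reason", "UNKNOWN")] += 1
    return reasons.most_common(limit)

for rank, (reason, count) in enumerate(top_rejection_reasons("pharmacy_claims.csv"), start=1):
    print(f"{rank:2d}. {reason}: {count}")
```

The hard part in practice is less the counting and more making somebody accountable for removing each top reason, as the author points out.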

Likewise, the company’s risk management system identified weekly the top 10 reasons customers got hurt in the stores (by the way, the next time you are in a grocery store, thank the worker sweeping up spilled grapes from the floor around the salad bar). This sounds simple, but you might be surprised at the business benefits obtained from constantly solving the problems at the top of a dynamic top-10 list.

Today, your company may be making the big mistake of ignoring the majority of data around it. Hadoop and its ecosystem of products and partners make it easier for everybody to get value from Big Data.


We are truly just at the beginning of this Big Data movement. Exciting things are still ahead. 


Information Builders Talking Big Data at Summit 2015

June 16, 2015   Tableau
In just a couple of weeks, Information Builders will hold its annual user conference in Kissimmee, Florida. Many of the topics at Summit 2015 will deal with Big Data.


Be sure to attend the following sessions:

Real-Time Analytics on Hadoop
Tom White, MapR Technologies
Eric Greisdorf, Information Builders
Sunday 2:00PM – 3:00PM
We’ve all heard that the market is demanding big data solutions that provide real-time insights on their data. With the countless claims of companies solving this problem, how can you discern fact from fiction? And how do these solutions support WebFOCUS in providing real-time insights? Join MapR Technologies, an Information Builders partner and provider of the leading Hadoop distribution, to learn how MapR and WebFOCUS deliver on the promise of true real-time data analytics.


The Role of BI in Big Data
John Thuma, Teradata Big Data Practice
Sunday 3:15PM – 4:15PM
Big data is not a product or a service. Big data is a movement. Understanding how you can leverage big data from within your enterprise may be a challenge. Business intelligence (BI) and data warehousing have matured into technology, process, and people. In this session, we will discuss how BI tools fit into this new big data zoo. The secret is, there is no secret. Don’t forget what you already know.


Systems of Insight: A New Approach to BI to Make Your Big Data Actionable
Boris Evelson, Forrester Research
Monday 1:30PM – 2:30PM
Customer insight teams, agile business intelligence (BI) investments, and big data buzz have grown at breakneck rates as organizations try to capitalize on new data with limited success. To break through the data fog, technology leaders need new approaches to systematically link data directly to insight and action. In this session, Mr. Evelson will answer questions such as: (1) What will it take for organizations to start using more of its data for analysis and insights? Today, the average organization uses only 12 percent of its data. (2) Why is business agility a key success factor in the age of the customer, and what impact does it have on your earlier-generation data management and BI investments? (3) What are the key differences between earlier-generation BI and the leading-edge systems of insights? (4) What are the key components of the new-generation systems of insights (processes, people, technology)?


What Direction Is the BI Market Heading?
Howard Dresner, Dresner Advisory Services, LLC
Monday 4:00PM – 5:00PM
In this session, veteran industry analyst Howard Dresner shares the latest findings from his annual “Wisdom of Crowds Business Intelligence Market Study.” He’ll answer questions such as: Who’s driving business intelligence (BI) within the organization? Who are the targeted users and how are they changing? Which organizations are most successful with BI and why? What do organizations hope to achieve with BI and how is that changing over time? Which technologies and initiatives are most important, which are climbing, and which are falling? What is the current state of data and how has this changed since last year? How are people sharing BI-derived insights within their organizations and has this improved since 2014? How has user adoption of BI changed in recent years and why?


The Big Deal in Big Data and Internet of Things is Analytics and BI
Mark Smith, Ventana Research
Tony Cosentino, Ventana Research
Tuesday 11:00AM – 12:00PM
In today’s applications, systems, and devices, there is data being generated every second of the day that can either overwhelm an organization, or improve its effectiveness. Smart organizations architect their enterprise to integrate and process data from any location, including cloud computing and the Internet of Things (IoT), and at any time to deliver analytics and business intelligence (BI) that improve performance. Using a business perspective on technology and IT is required to bring the right analytics and BI technology and skills to an organization. Moving beyond the hype on agile and self-service BI requires a focus on the metrics and information people need to be effective in their roles and responsibilities. Unveiling the latest in analytics and data research across business and IT, Ventana’s Tony Cosentino and Mark Smith will provide best practices and steps to help any organization be effective in using big data for a strategic advantage in analytics and BI.


Interested in Big Data and Hadoop?
Stephen Mooney, Information Builders
Tuesday 11:00AM – 12:00PM
Are you interested in learning how iWay leverages the Hadoop ecosystem? Join us for an informative session on big data, where we will show you how iWay is harnessing the power of technologies like Sqoop, Flume, Kafka, Storm, and HDFS to provide a simplified and reliable data integration platform.


How to Train an Elephant: Accessing Data Managed by Hadoop
Clif Kranish, Information Builders
Wednesday 9:45AM – 10:45AM
Many organizations now rely on Hadoop for their big data initiatives. In this presentation, we will show you how data managed by Hadoop can be staged by DataMigrator and used by WebFOCUS. We will cover how to use the data adapter for Hive and when to use Impala or Tez. You will learn how arrays and other complex Hive data types are supported, and how to take advantage of alternatives to HDFS, such as MapR-FS. We will also introduce the new Phoenix adapter for the NoSQL database HBase, which is distributed with Hadoop.


Wendy’s Wins Big with BI/Analytics

November 11, 2014   Tableau
Congratulations to my friends at The Wendy’s Company for being honored yesterday with Information Builders’ 2014 Award of Distinction.

IB wrote this about Wendy’s enterprise dashboard:



“The Wendy’s Company, the world’s third largest quick-service hamburger company, created a BI portal and dashboard environment that integrates an enterprise point-of-sale system to deliver targeted reports with drill-down capabilities for decision-makers at every level of the company. WebFOCUS helps managers control costs and make informed decisions that improve the bottom line. Thousands of international and domestic franchises currently use WebFOCUS dashboards, helping Wendy’s to improve profit margins at hundreds of restaurants.” 

At the beginning, the idea was for an “above-the-store” executive portal where a few individuals could see all of the company’s KPIs related to revenue, speed of service, costs, and customer satisfaction. However, it did not take long before thousands of decision makers at different levels of the QSR organization asked for access to that valuable information.

For more information, see IB’s press release. 


Lessons from Doorbell Replacement

November 9, 2014   Tableau
How hard can it be to replace the button for a doorbell?

That was the task my wife gave me and it appeared within my abilities. Surely, I could be done within an hour.

My wife had already spent thirty dollars on a brushed metal doorbell that looked nice during the day with a back-lit button you could see at night.

On Saturday, I jumped into action and removed the old buzzer button, a simple twenty-year-old plastic box. I pulled the doorbell away from the door frame and undid the twisted wires. Quite brittle, they broke easily. To get more length, I tried to pull the wires out farther, but they would not budge. If I was not careful, I would have to buy a wireless doorbell instead.

I examined the new doorbell as I took it out of the package. While the old one had simply lain on top of the wood, this one had an inch-long metal piece that was supposed to fit inside a 5/8-inch-wide hole. A search of my toolbox came up empty for that particular drill bit. Plus, I didn’t really believe the doorframe was deep enough.

“Look,” I explained to my wife, “this isn’t going to work” and presented various reasons to discard her plan.

With open disappointment, she agreed; I put the old button back in place. Ding dong, it still worked.

At the home improvement store, we bought a different thirty-dollar brushed-metal doorbell button; this one could lay flat against the door frame but did not have a light, a compromise.

On the next Saturday, I went back to work. How hard could this be?

I once again removed the old doorbell button. This new one had two pieces: a back that attached to the door frame with screws, and a front that snapped onto the back. The wires gave me grief, but I was finally able to attach the new doorbell button.

This one would not lie flat; some type of plastic protrusion on the back always got in the way. The last doorbell ringer needed a hole, so I considered that as a potential solution for this situation. I got out a power drill and started poking little holes in the door frame.

Ultimately, I was able to get the button to lie flat. When I tried to snap on the front, however, it would not close; something was preventing the snap from catching. I completely removed the doorbell, busted some more holes behind it, hooked it up again, took it off, and repeated several times.

By now I was frustrated. It wouldn’t snap closed, so I decided to try to keep it shut with some Gorilla Glue.

No luck. With brown glue spots all over the door frame and a ruined doorbell, I had failed. Trying to remove the Gorilla Glue mess, I scrubbed off patches of door frame paint. I tossed the thirty-dollar buzzer in the trash and once again returned the old plastic one to its proper place. Ding dong, it still worked.

Okay, I needed to stop and think about this. What approach was best? I still had the original back-lit doorbell buzzer that my wife wanted. I needed the right tools to do the job.

I made another trip to the home improvement store and bought a 5/8-inch hole drill bit with diamond grit (coincidentally, another thirty dollars). While there, I picked up white paint to cover the Gorilla Glue fiasco.

On the third Saturday, I removed the old doorbell buzzer, drilled the hole, and put in the new buzzer; it just barely fit. With some silicone caulking around the button and some white paint to cover mistakes, all was good. Ding dong!

After replacing this legacy piece of hardware, here are some of my personal insights:

  • I started without assessing the situation 
  • I never had a proper plan  
  • Having never done this before, I did not have the proper know-how, expertise, or skills
  • I did not have the proper tools to do the job 
  • It took longer than expected (especially without plan, skills, or tools)
  • It cost more than expected 
  • I could have saved by hiring a professional

My personal experience with a doorbell buzzer is similar to companies replacing their legacy business systems. How hard could it be, for example, to get rid of old reporting applications and convert all of the existing procedures to newer technology?

Upper management already bought the new BI product, so you just assign the conversion effort to the college intern. How hard could it be? Surely, she can knock it out quickly.

Ding dong: no up-front assessment, no planning, no accurate expectations as to time and cost, no specialized skills or tools, minimal progress every Saturday.

You may consider a legacy system modernization initiative as a one-off project your team can just fumble through and then forget about. That can be the painful approach and you may have to cover up mistakes afterwards. Before you do that, consider there are professionals who have done modernizations before and who have developed methodologies and automated software to reduce the time, cost, and risk.

Don’t be a ding dong. 
