
Monthly Archives: November 2020

Laundry Room Thunderstruck

November 30, 2020   Humor

Posted by Krisgo

About Krisgo

I’m a mom who has worn many different hats in this life; from scout leader, camp craft teacher, parents group president, colorguard coach, member of the community band, and stay-at-home mom to full-time worker, I’ve done it all – almost! I still love learning new things, especially creating and cooking. Most of all, I love to laugh! Thanks for visiting – come back soon.



Deep Fried Bits


Get a Microsoft Dynamics 365/CRM Estimate without Engaging a Salesperson

November 30, 2020   Microsoft Dynamics CRM

There’s never been a better time to investigate the versatility, power, and advanced features of Microsoft Dynamics 365. Now more than ever, businesses are looking for tools to make their workers, both onsite and remote, more efficient, accurate, productive, and secure. Posts by our expert members on the CRM Software Blog will answer many of your questions about Dynamics 365’s features and offer suggestions about how it can help your business.

But one thing you’ll never see in a post is a price quote. Naturally, a competent partner will want to sit down with you and discuss your particular business needs and goals. That’s because there are so many variables, even for businesses within the same industry.

Perhaps you’re not ready to sit down with a salesperson just yet. Maybe you’d like an estimate to see if Dynamics 365/CRM will fit into your budget. Good news! We have a tool for that: The CRM Software Blog’s Quick Quote Tool.

The Quick Quote Tool

We developed The CRM Software Blog’s Quick Quote tool years ago. As the industry has evolved and technology has progressed, we’ve adjusted the tool to keep pace. The Quick Quote tool now provides a working estimate for Microsoft Dynamics 365, Microsoft’s solution that integrates ERP and CRM. You’ll get an estimated price for the total cost of software, implementation, training, and ongoing expenses. The Quick Quote tool is a hassle-free way to determine whether Microsoft Dynamics 365 is a fit for your business and your budget.

The Quick Quote tool takes only a few minutes and is completely free. Find it on the right side of any page of The CRM Software Blog. Click on the orange bar labeled “Request Instant Quote Dynamics 365/CRM”. Fill out the Microsoft Dynamics 365 Quick Quote request form to let us know whether you’re interested in the Business Edition or Enterprise Edition, what level you want (Basic, Basic Plus, or Advanced), how many users you anticipate, and your contact information. It’s as easy as that. Click submit, and within a couple of minutes, a personalized proposal will appear in your inbox.

The proposal will contain a detailed budgetary estimate, as well as information about setup and training, client testimonial videos, and a dozen or so links to helpful information so you can learn all about how Microsoft Dynamics 365 can be used to the greatest advantage at your company. Your contact information will be forwarded to just one of our CRM Software Blog members who will be glad to answer any questions you have and work with you on the installation if you choose. Of course, both the estimate and the partner referral are non-binding. They are provided for your convenience.

So, why not try the Quick Quote tool? It’s fast, and it’s free. Get your Microsoft Dynamics Quick Quote estimate now!

By CRM Software Blog Writer, www.crmsoftwareblog.com


CRM Software Blog | Dynamics 365


Teradata Announces Changes to Board of Directors

November 30, 2020   BI News and Info

David Kepler and James Ringler to Retire; Board Size to be Reduced to Nine Directors

Teradata (NYSE: TDC), the cloud data analytics platform company, today announced that David Kepler and James Ringler intend to retire from the Board of Directors as of the time of the 2021 Annual Meeting of Stockholders. With these changes, Teradata will reduce the size of its Board to nine members, eight of whom will be independent. In connection with today’s announcement, and reflective of Teradata’s ongoing board succession planning, Kimberly Nelson, a director of Teradata since November 2019 and the Executive Vice President and Chief Financial Officer of SPS Commerce, Inc., has been appointed Chair of the Board’s Audit Committee, effective January 1, 2021.

Michael Gianoni, Chairman of the Teradata Board of Directors, stated, “On behalf of the entire Board, I want to extend my gratitude to Dave and Jim for their distinguished service and significant contributions to Teradata over many years. Both Dave and Jim have been integral members of our Board since 2007 and we wish them all the best going forward.”

Mr. Gianoni continued, “As a Board, our focus is on best-in-class corporate governance that aligns with the execution of the Company’s long-term strategy. With effective and agile oversight, and a leadership team accelerating its cloud-based strategy, Teradata is extremely well-positioned to continue delivering for our customers, supporting our people and driving outstanding top and bottom line expectations to enhance shareholder value. The solid third quarter 2020 financial results announced today reflect the efforts of Steve McMillan and the entire team, and we look forward to continued success.”


Teradata United States


Preparing for FOCUS-to-WebFOCUS Conversions

November 30, 2020   Tableau

 If you are considering converting your FOCUS 4GL environment to the new web-based version, here are some things you need to know.

Many people want to understand the difference between FOCUS and WebFOCUS and come to my blog looking for a comparison between the two products, so let me start there.

Both are software products from Information Builders and both share a common 4GL processor. In fact, the vendor in recent years has been able to consolidate these two products into a single code base, which is fairly portable and independent of any particular operating system.

The FOCUS product was used both interactively and in batch. Online users could communicate through menus and screens to provide information, or go directly to a command processor for simple ad hoc requests. Programs could also be run using JCL or another batch control mechanism, with parameters passed in or determined by the program itself.

There are three broad components of the FOCUS 4GL. The main piece is a non-procedural language for reporting, graphing, analysis, and maintaining data. There is also a procedural scripting language (Dialogue Manager) that provides logical control of the embedded non-procedural code, symbolic variable substitutions, and multi-step complex processes. These are critical to enabling WebFOCUS to build complex, dynamically generated web applications.

A third important component is the metadata and adapter layer, which hides the complexity of the underlying data structures, allowing developers and end users to write 4GL programs with minimal knowledge of the data.

Major Features of the Procedural Scripting (Dialogue Manager):

  • Symbolic variable substitutions (calculations, prompting, file I/O, etc.)
  • System variables (date, time, userid, platform, environment settings, etc.)
  • Calculations of temporary variables
  • GOTO branch controls and procedural labels (non-conditional as well as IF-THEN-ELSE conditional branching)
  • Embedded operating system commands
  • External file I/O
  • Green-screen interaction with the user (not functional in WebFOCUS)
  • Executing procedures (EXEC command and server-side code inclusions)

Major Features of the Non-Procedural Scripting (FOCUS 4GL):

  • Reports and output files (TABLE facility)
  • Graphs (GRAPH facility)
  • Joining files (JOIN facility)
  • Matching files (MATCH facility)
  • Database maintenance (MODIFY facility; non-screen features supported in WebFOCUS, otherwise replaced by MAINTAIN)
  • Statistical analysis (ANALYZE facility; was rarely used and not ported to WebFOCUS; recently R Stat support was added)
  • Environment settings (SET phrases)
  • Calculation of temporary columns (DEFINE and COMPUTE phrases)

FOCUS-to-WebFOCUS Conversion issues:
Despite the portable FOCUS 4GL that lies beneath the covers of WebFOCUS, there are still some considerable challenges to converting from legacy to web-based architectures. I have solved some of those problems for you by automating the process. Below are some conversion issues and their potential solutions.

1) Major architectural change (single technology stack to enterprise web stack)

Solution: architect a solution that minimizes change
Solution: for new WebFOCUS app path commands, automatically add to existing code

2) New end user environment

Solution: automatically convert existing 4GL programs for users; generate scripts for loading Managed Reporting Environment; provide user training

3) Persistent sessions not supported in web environment

Solution: analyze and determine how to replicate persistence (for example, loss of “global” variables)

4) Batch processing handled differently in web environment

Solution: replicate batch jobs using WebFOCUS ReportCaster scheduler/distribution product

5) Output report formats default to HTML, which does not respect original layout

Solution: automatically add stylesheets and PDF support

6) Dumb terminal green-screens not supported in WebFOCUS

Solution: for simple menus, convert to HTML
Solution: for simple data maintenance, convert to HTML and MODIFY
Solution: for complex data maintenance, convert to MAINTAIN

7) WebFOCUS eliminated some legacy FOCUS features (text editor, end-user wizards, type to screen, ANALYZE statistical facility, etc.)

Solution: analyze and develop work-around

8) New Graph engine

Solution: automatically add support for new graph rendering (third-party Java product)

9) If moving to new platform, multiple problems, including access to legacy data, embedded OS commands, file names, allocations, user-written subroutines, userids, printer ids, integrated third-party tools (e.g., SAS, SyncSort, OS utilities), etc.

Solution: analyze and automatically convert as much as possible

10) Organization typically wants to take advantage of new features quickly

Solution: automatically add some support during conversions (e.g., spreadsheets, dynamic launch pages to consolidate existing FOCUS code) — in other words, get rid of the legacy product as quickly as possible by doing a straight replication, but try to give the business some new things in the process

Trying to manually convert FOCUS to WebFOCUS is just not a good approach. By utilizing a proven methodology and software toolkit for automating much of the manual effort, you will dramatically reduce the time, cost, skill-set requirements, and risk of doing the legacy replacement.

Be sure to read some of my other blogs on this topic.  A good place to start is here.

If you have questions, feel free to contact me.

You may also be interested in these articles:
  • White Paper on Automating BI Modernizations
  • BI Modernization Frequently Asked Questions
  • Using Text Data Mining and Analytics for BI Modernizations
  • Using Word Clouds to Visually Profile Legacy BI Applications
  • DAPPER Methodology for BI Modernizations
  • Leave a Legacy: Why to Get Rid of Legacy Reporting Apps
  • Moving off the Mainframe with Micro Focus
  • Preparing for FOCUS-to-WebFOCUS Conversions
  • Converting the NOMAD 4GL to WebFOCUS
  • Convert FOCUS Batch JCL Jobs for WebFOCUS
  • Automatically Modernize QMF/SQL to WebFOCUS
(originally posted on 2009 Feb 03)


Business Intelligence Software


Facebook acquires messaging marketing automation startup Kustomer

November 30, 2020   Big Data


Facebook today announced it will acquire Kustomer, a New York-based customer relationship management startup, for an undisclosed amount. When the deal closes, Facebook says it will natively integrate Kustomer’s tools with its messaging platforms, including WhatsApp and Messenger, to allow businesses and partners to better manage their communications with users.

For most brands, guiding and tracking customers through every step of their journeys is of critical operational importance. According to a recent PricewaterhouseCoopers report, the number of companies investing in omnichannel experiences has jumped from 20% to 80%, and an Adobe study found that those with the strongest omnichannel customer engagement strategies enjoy 10% year-over-year growth on average and a 25% increase in close rates.

“We’ve witnessed this shift firsthand as every day more than 175 million people contact businesses via WhatsApp. This number is growing because messaging provides a better overall customer experience and drives sales for businesses,” Facebook VP of ads and business products Dan Levy and WhatsApp COO Matt Idema wrote in a blog post. “As businesses adjust to an evolving digital environment, they’re seeking solutions that place people at the center, especially when it comes to communication. Any business knows that when the phone rings, they need to answer it. Increasingly, texts and messages have become just as important as that phone call — and businesses need to adapt.”

AOL and Salesforce veterans Brad Birnbaum and Jeremy Suriel founded Kustomer in 2015, which went on to attract customers including Sweetgreen, Ring, Glossier, Rent the Runway, Away, and Glovo. The company’s platform let clients search, display, and report out-of-the-box on objects like “customers” and “companies,” with tweakable attributes such as orders, feedback scores, shipping, tracking, web events, and more. On the AI side of the equation, Kustomer offered a conversational assistant that collects customer information for human agents and auto-routes conversations.

Kustomer’s workflow and business logic engines supported the creation of conditional, multi-branch flows that enabled each step to use the output of any previous step and to trigger responses based on defined events from internal or third-party systems. From a dashboard, managers could view which agents are working in real time and launch customer satisfaction surveys (or view the results of recent surveys). The dashboard also exposed sentiment to provide a metric for overall customer service effectiveness, and it enabled admins to customize Kustomer’s self-service, customer-facing knowledge base with articles, tutorials, and rich media including videos, PDFs, and other formats.

Last year saw the launch of KustomerIQ, which allowed companies to train AI models to address their unique business needs. The models in question could automatically classify conversations and customer attributes, reading messages between customers and agents using natural language processing techniques.

Prior to the Facebook acquisition, Kustomer raised $173.5 million across six fundraising rounds. Earlier this morning, The Wall Street Journal reported that the deal announced today could value the startup at more than $1 billion.

Birnbaum, Suriel, and the rest of the Kustomer team will join Facebook once the transaction is approved. Facebook says that Kustomer businesses will continue to own the data that comes from interactions with their customers, but that it eventually expects to host Kustomer data on its infrastructure.

“Once the acquisition closes, we look forward to working closely with Facebook, where we will continue to serve our customers and work with our partners as part of the Facebook family,” Birnbaum wrote in a blog post. “With our complementary capabilities, we will be able to help more people benefit from customer service that is faster, richer and available whenever and however they need it — via phone, email, text, web chat or messaging. In particular, we look forward to enhancing the messaging experience which is one of the fastest growing ways for people and businesses to engage.”



Big Data – VentureBeat


Developing a backup plan

November 30, 2020   BI News and Info

The most important task for a DBA is to be able to recover a database in the event that it becomes corrupted. Databases can become corrupted for many different reasons. The most common cause is a programming error, but hardware failures can also corrupt a database. Regardless of how a database becomes corrupt, a DBA needs a solid backup strategy in order to restore it with minimal data loss. In this article, I will discuss how to identify the backup requirements for a database and then how to use those requirements to develop a backup strategy.

Why develop a backup plan?

You might be wondering why you need to develop a backup plan. Can’t a DBA just implement a daily backup of each database and call it good? That might work, but it doesn’t consider how an application uses a database. If you have a database that is only updated by a nightly batch process, then a daily backup taken right after that process might be all you need. But what if you had a database that is updated all day long by an online application? With only one daily backup, you might lose up to a day’s worth of online transactions if the database were to fail right before the next backup. Losing a day’s worth of transactions would most likely be unacceptable. Therefore, to ensure minimal data loss when restoring a database, identify the backup and recovery requirements before building a backup solution.

Identifying backup and recovery requirements

Each database may have different backup and recovery requirements. When discussing backup and recovery requirements for a database, there are two different types of requirements to consider. The first requirement is how much data can be lost in the event of a database becoming corrupted. Knowing how much data can be lost will determine the types of database backups you need to take, and how often you take those backups. This requirement is commonly called the recovery point objective (RPO).

The second backup requirement to consider is how long it will take to recover a corrupted database. This requirement is commonly called the recovery time objective (RTO). The RTO identifies how long the database can be down while the DBA is recovering it. When defining the RTO, make sure to consider more than just how long it takes to restore the database. Other tasks take time and need to be considered as well, such as identifying which backup files to use, finding those files, building the restore script or process, and communicating with customers.

A DBA should not identify the RTO and RPO in a vacuum. The DBA should consult each application owner to set the RTO and RPO requirements for each database the application uses. The customers are the ones who should drive these requirements, with help from the DBA, of course. Once the DBA and the customer have determined the appropriate RTO and RPO, the DBA can then develop the backups needed to meet these requirements.

Types of backups to consider

There are a number of different backup types you could consider; see my previous article for a description of each. Of those, three types support most backup and recovery strategies: full, differential, and transaction log.

The Full backup, as it sounds, is a backup that copies the entire database off to a backup device. The backup will contain all the used data pages for the database. A full backup can be used to restore the entire database to the point in time that the full backup completed. I say completed because, if update commands are being run against the database at the time the backup is running, then they are included in the backup. Therefore, when you restore from a full backup, you are restoring a database to the point-in-time that the database backup completes.
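
To make this concrete, here is a minimal T-SQL sketch of a full backup as it would look on SQL Server; the database name SalesDB and the file path are placeholders for illustration, not names used in this article.

-- Full backup: copies all used data pages to the backup device
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_Full.bak'
WITH INIT, CHECKSUM, STATS = 10;   -- overwrite the backup file, add checksums, report progress every 10%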

A differential backup is a backup that copies all the changes since the last full backup off to a backup device. Differential backups are useful for large databases, where only a small number of updates have been performed since the full backup. Differential backups will run faster and take up less space on the backup device. A differential backup can’t be used to restore a database by itself. The differential backup is used in conjunction with a full backup to restore a database to the point in time that the differential backup completed. This means the full backup is restored first, then followed by restoring the differential backup.
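
In T-SQL terms, a differential backup is the same BACKUP DATABASE command with the DIFFERENTIAL option added; again, SalesDB and the path are placeholders.

-- Differential backup: copies only the extents changed since the last full backup
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_Diff.bak'
WITH DIFFERENTIAL, INIT, CHECKSUM, STATS = 10;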

The last type of backup is a transaction log backup. A transaction log backup copies all the transactions in the transaction log file to a backup device. It also removes completed transactions from the transaction log to keep the log from growing out of control. A transaction log backup, like a differential backup, can’t be used by itself to restore a database. It is used in conjunction with a full backup, and possibly a differential backup, to restore a database to a specific point in time. The advantage of a transaction log backup is that you can tell the restore process to stop at any point in time covered by the backup. By using this stop feature, you can restore a database right up to the moment before it became corrupted. Transaction log backups are typically taken frequently, so there might be many of them between each full or differential backup. Transaction log backups are beneficial for situations where there is a requirement for minimal data loss in the event of a database becoming corrupted.
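
In T-SQL, a transaction log backup uses the BACKUP LOG command. The sketch below assumes the placeholder database SalesDB is using the FULL (or BULK_LOGGED) recovery model, which is required for log backups.

-- Transaction log backup: copies the log records to the backup device
-- and truncates the inactive (completed) portion of the log
BACKUP LOG SalesDB
TO DISK = N'D:\Backups\SalesDB_Log_20201130_1300.trn'
WITH CHECKSUM, STATS = 10;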

Developing a backup strategy for a database

When determining a backup plan for a database, you need to determine how much data can be lost and how long it takes to recover the database. This is where the RTO and RPO come in to determine which types of database backups should be taken. In the sections below, I will outline different database usage situations and then discuss how one or more of the backup types could be used to restore the database to meet the application owner’s RTO and RPO requirements.

Scenario #1: Batch updates only

When I say “batch updates only”, I am referring to a database that is updated only by a batch process, not online or by ad hoc updates. One example of this type of database is one that receives updates in a flat-file format from a third-party source on a schedule. When a database receives updates via a flat file, those updates are applied using a well-defined update process, typically scheduled to coincide with when the flat file is received from the third-party source. In this kind of situation, the customer might define the RPO something like this: “In the event of a corrupted database, the database needs to be restored to the point just after the last batch update process,” and the RTO something like this: “The restore process needs to be completed within X number of hours.”

When a database is only updated by a batch process that runs on a schedule, all you need is a full backup taken right after the database has been updated. By doing this, you can recover to a point in time right after the update, so the full backup alone will meet the RPO. Since restoring a full backup takes about as long as taking it, the RTO needs to be at least as long as the restore process itself, plus a little more time for organizing and communicating the restore operation.

Scenario #2 – Batch updates only, with short backup window

This scenario is similar to the last one, but in this situation there is very little time to take a backup after the batch processing completes. The time it takes to back up a database is directly proportional to the amount of data that needs to be backed up. If the backup window is short, it might be too short to take a full backup every time the database is updated; this might be the case when the database is very large. If there isn’t time for a full database backup and the amount of data updated is small, then a differential backup is a good choice to meet the RPO/RTO requirements. With a differential backup, only the updates since the last full backup are copied to the backup device. Because only the updates are backed up and not the entire database, a differential backup can run much faster than a full backup. Keep in mind that to restore a differential backup, you must first restore the full backup. In this situation, a full backup needs to be taken periodically, with differential backups taken in between. A common schedule is to take the full backup when there is a large batch window, such as on a Sunday when there is no batch processing, and then differential backups on the days when the batch window is short.

Scenario #3 – Ad hoc batch updates only

Some databases are not updated on a schedule but are instead updated periodically by an ad hoc batch update process that is manually kicked off. There are a couple of ways to handle backups for databases in this category. The first is simply to run full database backups on a routine schedule. The second is to trigger a backup as the last step of the ad hoc batch update process.

A routinely scheduled full backup is not ideal because the backups may or may not run soon after the ad hoc batch update process. When there is a gap between the ad hoc process and the scheduled full backup, the database is vulnerable to data loss should it become corrupted before the full backup is taken. To minimize the time between the ad hoc update and the database backup, it is better to add a backup command to the end of the ad hoc update process. This way, there is a backup soon after the ad hoc process, which minimizes the window in which data could be lost. Additionally, by adding a backup command to the ad hoc update process, you potentially take fewer backups than with a routine backup schedule, which reduces processing time and backup device space.

Scenario #4 – Online updates during business hours

In this scenario, the database gets updates from online transactions, but only during business hours, say 8 AM to 5 PM. Outside of regular business hours, the database is not updated. Here you might consider a combination of two backup types: full and transaction log backups. The full backup would run off-hours, meaning after 5 PM and before 8 AM. The transaction log backups would run during business hours to capture the online transactions shortly after they have been made. You need to review the RPO to determine how often to run transaction log backups: the shorter the RPO, the more often you need to take them. For example, if a customer says they can lose no more than an hour’s worth of transactions, then you need to run a transaction log backup every hour between 8 AM and 5 PM.
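
To recover such a database to a point just before a failure, you restore the most recent full backup WITH NORECOVERY, apply the hourly log backups in sequence, and stop at the desired time. The restore chain below is only a sketch; the file names and the assumed 2:15 PM failure time are placeholders for illustration.

-- Restore the off-hours full backup, leaving the database ready to accept log restores
RESTORE DATABASE SalesDB
FROM DISK = N'D:\Backups\SalesDB_Full.bak'
WITH NORECOVERY, REPLACE;

-- Apply the hourly log backups in order
RESTORE LOG SalesDB FROM DISK = N'D:\Backups\SalesDB_Log_0900.trn' WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = N'D:\Backups\SalesDB_Log_1000.trn' WITH NORECOVERY;
-- ...continue through each hourly log backup...

-- Stop just before the 2:15 PM failure and bring the database online
RESTORE LOG SalesDB
FROM DISK = N'D:\Backups\SalesDB_Log_1500.trn'
WITH STOPAT = '2020-11-30 14:14:00', RECOVERY;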

Scenario #5 – Online updates 24×7

Some databases are accessed every day all day. This is very similar to Scenario #4, but in this case, the database is accessed and updated online 24×7. To handle backup and recovery in this situation, you would take a combination of full and differential backups along with transaction log backups.

With a database that is updated 24×7, you want to run the full and differential backups at times when the databases have the least number of online updates happening. By doing this, the performance impact caused by the backups will be minimized. There are way too many different database situations to tell you exactly how often a full or differential backup should be taken. I would recommend you try to take a full backup or differential backup daily if that is possible. By doing it daily, you will have fewer backup files involved in your recovery process.

The transaction log backups are used to minimize data loss. Like Scenario #4, the frequency of transaction log backups is determined by the RPO requirements. Assuming the customer can lose one hour’s worth of transactions, transaction log backups would need to run hourly, all day, every day to cover the 24×7 online processing.

Developing a backup plan

It is important for a DBA to work with the application owners to identify the backup and recovery requirements for their databases. The application owners determine how much data they can lose (RPO), and how long the database can be down while it is being recovered (RTO). Once the RTO and RPO requirements are defined, the DBA can then develop a backup plan that aligns with these requirements.


SQL – Simple Talk


Banana for scale

November 30, 2020   Humor

Posted by Krisgo




Deep Fried Bits


Robotics researchers propose AI that locates and safely moves items on shelves

November 29, 2020   Big Data


A pair of new robotics studies from Google and the University of California, Berkeley propose ways of finding occluded objects on shelves and solving “contact-rich” manipulation tasks like moving objects across a table. The UC Berkeley research introduces Lateral Access maXimal Reduction of occupancY support Area (LAX-RAY), a system that predicts a target object’s location, even when only a portion of that object is visible. As for the Google-coauthored paper, it proposes Contact-aware Online COntext Inference (COCOI), which aims to embed the dynamics properties of physical things in an easy-to-use framework.

While researchers have explored the robotics problem of searching for objects in clutter for quite some time, settings like shelves, cabinets, and closets are a less-studied area, despite their wide applicability. (For example, a service robot at a pharmacy might need to find supplies from a medical cabinet.) Contact-rich manipulation problems are just as ubiquitous in the physical world, and humans have developed the ability to manipulate objects of various shapes and properties in complex environments. But robots struggle with these tasks due to the challenges inherent in comprehending high-dimensional perception and physics.

The UC Berkeley researchers, working out of the university’s AUTOLab department, focused on the challenge of finding occluded target objects in “lateral access environments,” or shelves. The LAX-RAY system comprises three lateral access mechanical search policies. Called “Uniform,” “Distribution Area Reduction (DAR),” and “Distribution Area Reduction over ‘n’ steps (DER-n),” they compute actions to reveal occluded target objects stored on shelves. To test the performance of these policies, the coauthors leveraged an open framework — The First Order Shelf Simulator (FOSS) — to generate 800 random shelf environments of varying difficulty. Then they deployed LAX-RAY to a physical shelf with a Fetch robot and an embedded depth-sensing camera, measuring whether the policies could figure out the locations of objects accurately enough to have the robot push those objects.


The researchers say the DAR and DER-n policies showed strong performance compared with the Uniform policy. In a simulation, LAX-RAY achieved 87.3% accuracy, which translated to about 80% accuracy when applied to the real-world robot. In future work, the researchers plan to investigate more sophisticated depth models and the use of pushes parallel to the camera to create space for lateral pushes. They also hope to design pull actions using pneumatically activated suction cups to lift and remove occluding objects from crowded shelves.

In the Google work, which had contributions from researchers at Alphabet’s X, Stanford, and UC Berkeley, the coauthors designed a deep reinforcement learning method that takes multimodal data and uses a “deep representative structure” to capture contact-rich dynamics. COCOI taps video footage and readings from a robot-mounted touch sensor to encode dynamics information into a representation. This allows a reinforcement learning algorithm to plan with “dynamics-awareness” that improves its robustness in difficult environments.

The researchers benchmarked COCOI by having both a simulated and real-world robot push objects to target locations while avoiding knocking them over. This isn’t as easy as it sounds; key information couldn’t be easily extracted from third-angle perspectives, and the task dynamics properties weren’t directly observable from raw sensor information. Moreover, the policy needed to be effective for objects with different appearances, shapes, masses, and friction properties.


The researchers say COCOI outperformed a baseline “in a wide range of settings” and dynamics properties. Eventually, they intend to extend their approach to pushing non-rigid objects, such as pieces of cloth.


Big Data – VentureBeat


Dude! Where’s my Trace?

November 29, 2020   BI News and Info

 While an administrator can turn on traces, it’s also possible for a developer to put some tracing commands in his or her WebFOCUS code.

When WebFOCUS traces are turned on, you can display them as comments within your output’s HTML — something called “client-side tracing.” If you pick an output that does not generate an HTML page (e.g., PDF) then you will not be able to see the trace.

While most WebFOCUS environmental settings are basically toggle switches with ON and OFF settings, traces are different. There is a TRACEON setting as well as one for TRACEOFF. For each, you identify the trace component that you want to turn on/off. For example, I may want to see the SQL statements being generated from the 4GL commands; I need to TRACEON that particular SQL component (called STMTRACE — comparable to the FSTRACE4 for FOCUS).

If you want to query the current environmental settings, you can use “?” for the trace component value. There is one setting to see the trace options turned on (SET TRACEON = ?) and another for those turned off (SET TRACEOFF = ?). With a syntax inconsistent with other settings, the WebFOCUS tracing feature is less than intuitive.

See the WebFOCUS Server Console diagnostics custom-filtering page for a full list of the trace components. Relational data adapter traces include:

  • SQLDI (like FSTRACE) – SQL physical layer (BX, BY messages)
  • SQLAGGR (like FSTRACE3) – optimization information (BR messages)
  • STMTRACE (like FSTRACE4) – SQL statements (AE, AF messages)
  • SQLCALL – exchange between physical and logical layers of the data adapter (BW messages)

For non-relational interfaces, you have:

  • ADBSIN (Adabas) – has 4 trace levels
  • IDMS – has 2 levels
  • IMS – has 4 levels
  • M204
  • Others (Nomad, Millenium, Supra/Total, CA-Datacom)
  • Proprietary traces for IBI developers (IBITROUT)

Some components do not seem to be made for HTML tracing — their messages are not formatted properly for HTML standards. For example, NWH, NWH2, and CEH components may corrupt your HTML output. The R1H communications component may have a glitch — it starts two WebFOCUS agents which never end. So, do not tinker with traces on the production server.

Also, instead of identifying a specific trace component, you can say “ALL” — that is fine for turning OFF all traces, but you probably should not turn ON all traces.

In addition to the trace component, there is an optional parameter (separated by a forward slash) for the “Trace Level” as some components can display different levels of details. If you omit the trace level, you get all traces for that component. Setting the trace level to 1 will give you the top level of the details; 0 will turn it off. For the STMTRACE, you can specify the RDBMS adapter as the trace level, e.g., “STMTRACE/DB2/CLIENT”

Another optional parameter (again separated by a forward slash) is the location of the trace output. Frankly, I question the quality of this feature — traces are okay going into the HTML comments, but do not seem to go to physical file locations properly (there is an additional setting of TRACEUSER to identify the filename). To send the trace output to the HTML in your browser, use “CLIENT” for this parameter.

Here is the syntax: 

SET TRACEON = component [/tracelevel [ /filename]]
SET TRACEOFF = component [/tracelevel [ /filename]]

Here are some examples of the trace settings:

SET TRACEOFF = ALL                (turn all the traces off)
SET TRACEOFF = ?                  (show the inactive trace settings)

SET TRACEON = MFP//CLIENT         (MFD parsing; displays FG messages)

SET TRACEON = PRH//CLIENT         (FEX parsing; displays AG messages)
SET TRACEON = STMTRACE//CLIENT    (SQL calls; displays AE, AF messages)
SET TRACEON = SQLDI//CLIENT       (SQL physical layer; displays BX, BY messages)
SET TRACEON = SQLCALL//CLIENT     (exchange between SQL physical & logical layers; displays BW messages)
SET TRACEON = SQLAGGR//CLIENT     (SQL optimization messages; displays BR messages)
SET TRACEON = ESSBASE//CLIENT     (Essbase calls; displays CE, CF, CG, CH messages)
SET TRACEUSER = ON                (turn on user tracing)

In FOCUS, to see your generated SQL and optimization messages, you would have allocated some FSTRACE files and looked at the contents afterwards. To do something comparable in WebFOCUS, you use these commands and look in your HTML output:

SET TRACEOFF = ALL
SET TRACEON = STMTRACE//CLIENT
SET TRACEON = SQLAGGR//CLIENT
SET XRETRIEVAL = OFF
SET TRACEUSER = ON

Note: The XRETRIEVAL setting turns off the actual execution of the procedure, so you can see the SQL without actually running it and waiting for the answer set.

So while this WebFOCUS tracing feature may test your nerve and skill, it can be very useful to see what is going on under the covers.

(originally posted 2008 Dec 21)


Business Intelligence Software


TYPICAL LIBERAL

November 29, 2020   Humor


ANTZ-IN-PANTZ ……
