
Tag Archives: less

AI progress depends on us using less data, not more

February 14, 2021   Big Data



In the data science community, we’re witnessing the beginnings of an infodemic — where more data becomes a liability rather than an asset. We’re continuously moving towards ever more data-hungry and more computationally expensive state-of-the-art AI models. And that is going to result in some detrimental and perhaps counter-intuitive side-effects (I’ll get to those shortly).

To avoid serious downsides, the data science community has to start working with some self-imposed constraints: specifically, more limited data and compute resources.

A minimal-data practice will enable several AI-driven industries — including cyber security, which is my own area of focus — to become more efficient, accessible, independent, and disruptive.

When data becomes a curse rather than a blessing

Before we go any further, let me explain the problem with our reliance on increasingly data-hungry AI algorithms. In simplistic terms, AI-powered models are “learning” without being explicitly programmed to do so, through a trial-and-error process that relies on an amassed slate of samples. The more data points you have, even if many of them seem indistinguishable to the naked eye, the more accurate and robust your AI-powered models should be, in theory.

In search of higher accuracy and lower false-positive rates, industries like cyber security — once optimistic about their ability to leverage the unprecedented amount of data that followed from enterprise digital transformation — are now encountering a whole new set of challenges:

1. AI has a compute addiction. The growing fear is that new advancements in experimental AI research, which frequently require formidable datasets supported by an appropriate compute infrastructure, might stall due to compute and memory constraints, not to mention the financial and environmental costs of higher compute needs.

While we may reach several more AI milestones with this data-heavy approach, over time we’ll see progress slow. The data science community’s tendency to aim for data-“insatiable” and compute-draining state-of-the-art models in certain domains (e.g., the NLP domain and its dominant large-scale language models) should serve as a warning sign. OpenAI analyses suggest that the data science community keeps getting more efficient at reaching goals that have already been achieved, but that reaching dramatic new AI milestones requires orders of magnitude more compute. MIT researchers estimated that “three years of algorithmic improvement is equivalent to a 10 times increase in computing power.” Furthermore, creating an adequate AI model that will withstand concept drift over time and overcome “underspecification” usually requires multiple rounds of training and tuning, which means even more compute resources.

If pushing the AI envelope means consuming even more specialized resources at greater cost, then, yes, the leading tech giants will keep paying the price to stay in the lead, but most academic institutions will find it difficult to take part in this “high risk – high reward” competition. These institutions will most likely either embrace resource-efficient technologies or pursue adjacent fields of research. The significant compute barrier might have an unwarranted cooling effect on academic researchers themselves, who might choose to self-restrain or refrain entirely from pursuing revolutionary AI-powered advancements.

2. Big data can mean more spurious noise. Even if you assume you have properly defined and designed an AI model’s objective and architecture and that you have gleaned, curated, and adequately prepared enough relevant data, you have no assurance the model will yield beneficial and actionable results. During the training process, as additional data points are consumed, the model might still identify misleading spurious correlations between different variables. These variables might be associated in what seems to be a statistically significant manner, but are not causally related and so don’t serve as useful indicators for prediction purposes.

I see this in the cyber security field: The industry feels compelled to take as many features as possible into account, in the hope of generating better detection and discovery mechanisms, security baselines, and authentication processes, but spurious correlations can overshadow the hidden correlations that actually matter.
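
As a rough illustration of how easily spurious correlations creep in, here is a toy Python sketch (not tied to any real security dataset or feature set): generate a few thousand purely random features and a random label, and a handful of features will still look statistically significant by chance alone.

```python
import numpy as np
from scipy.stats import pearsonr

# Toy illustration: with enough unrelated features, some will appear
# "significantly" correlated with the label purely by chance.
rng = np.random.default_rng(0)

n_samples, n_features = 500, 2000
X = rng.normal(size=(n_samples, n_features))   # random, meaningless "features"
y = rng.integers(0, 2, size=n_samples)         # random binary label

false_hits = 0
for j in range(n_features):
    r, p = pearsonr(X[:, j], y)                # correlation and p-value
    if p < 0.01:                               # looks "significant" at the 1% level
        false_hits += 1

print(f"{false_hits} of {n_features} random features pass p < 0.01")
# Expect roughly 1% of them (about 20) to pass despite carrying no signal at all.
```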

3. We’re still only making linear progress. The fact that large-scale data-hungry models perform very well under specific circumstances, by mimicking human-generated content or surpassing some human detection and recognition capabilities, might be misleading. It might keep data practitioners from realizing that some of the current efforts in applicative AI research are only extending existing AI-based capabilities in a linear progression rather than producing real leapfrog advancements — in the way organizations secure their systems and networks, for example.

Unsupervised deep learning models fed on large datasets have yielded remarkable results over the years — especially through transfer learning and generative adversarial networks (GANs). But even in light of progress in neuro-symbolic AI research, AI-powered models are still far from demonstrating human-like intuition, imagination, top-down reasoning, or artificial general intelligence (AGI) that could be applied broadly and effectively on fundamentally different problems — such as varying, unscripted, and evolving security tasks while facing dynamic and sophisticated adversaries.

4. Privacy concerns are expanding. Last but not least, collecting, storing, and using extensive volumes of data (including user-generated data) — which is especially relevant for cyber security applications — raises a plethora of privacy, legal, and regulatory concerns and considerations. Arguments that cyber security-related data points don’t carry or constitute personally identifiable information (PII) are being refuted these days, as the strong binding between personal identities and digital attributes is extending the legal definition of PII to include, for example, even an IP address.

How I learned to stop worrying and enjoy data scarcity

In order to overcome these challenges, specifically in my area, cyber security, we have to, first and foremost, align expectations.

The unexpected emergence of Covid-19 has exposed how difficult it is for AI models to adapt effectively to unseen, and perhaps unforeseeable, circumstances and edge cases (such as a global transition to remote work), especially in cyberspace, where many datasets are naturally anomalous or characterized by high variance. The pandemic only underscored the importance of clearly and precisely articulating a model’s objective and of adequately preparing its training data. These tasks are usually as important and labor-intensive as accumulating additional samples or even choosing and honing the model’s architecture.

These days, the cyber security industry is required to go through yet another recalibration phase as it comes to terms with its inability to cope with the “data overdose,” or infodemic, that has been plaguing the cyber realm. The following approaches can serve as guiding principles to accelerate this recalibration process, and they’re valid for other areas of AI, too, not just cyber security:

Algorithmic efficacy as top priority. Taking stock of the plateauing of Moore’s law, companies and AI researchers are working to ramp up algorithmic efficacy by testing innovative methods and technologies, some of which are still at a nascent stage of deployment. These approaches, which are currently applicable only to specific tasks, range from the application of Switch Transformers to the refinement of few-shot, one-shot, and less-than-one-shot learning methods.
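
To make the few-shot idea concrete, here is a minimal, hedged sketch of a nearest-class-centroid classifier (in the spirit of prototypical networks). It is a generic illustration of learning from a handful of labelled examples, not an implementation of Switch Transformers or of the specific less-than-one-shot methods named above, and the identity `embed` function is a stand-in for a real pretrained encoder.

```python
import numpy as np

def embed(x: np.ndarray) -> np.ndarray:
    """Stand-in embedding. In practice this would be a pretrained encoder."""
    return x  # assume inputs are already useful feature vectors

def few_shot_classify(support_x, support_y, query_x):
    """Classify queries by distance to per-class centroids built from a
    handful of labelled 'support' examples (one or a few per class)."""
    classes = np.unique(support_y)
    centroids = np.stack([
        embed(support_x[support_y == c]).mean(axis=0) for c in classes
    ])
    q = embed(query_x)
    # Euclidean distance from every query to every class centroid.
    dists = np.linalg.norm(q[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Tiny usage example: two classes, three labelled samples each.
rng = np.random.default_rng(1)
support_x = np.vstack([rng.normal(0, 1, (3, 8)), rng.normal(4, 1, (3, 8))])
support_y = np.array([0, 0, 0, 1, 1, 1])
queries   = np.vstack([rng.normal(0, 1, (2, 8)), rng.normal(4, 1, (2, 8))])
print(few_shot_classify(support_x, support_y, queries))  # expected: [0 0 1 1]
```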

Human augmentation-first approach. By limiting AI models to augmenting the security professional’s workflows and allowing human and artificial intelligence to work in tandem, these models could be applied to very narrow, well-defined security applications, which by their nature require less training data. These AI guardrails could be manifested in terms of human intervention or by incorporating rule-based algorithms that hard-code human judgment. It is no coincidence that a growing number of security vendors favor offering AI-driven solutions that only augment the human-in-the-loop, instead of replacing human judgment altogether.

Regulators could also look favorably on this approach, since they look for human accountability, oversight, and fail-safe mechanisms, especially when it comes to automated, complex, and “black box” processes. Some vendors are trying to find middle ground by introducing active learning or reinforcement learning methodologies, which leverage human input and expertise to enrich the underlying models themselves. In parallel, researchers are working on enhancing and refining human-machine interaction by teaching AI models when to defer a decision to human experts.
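
A minimal sketch of that deferral pattern, assuming the model exposes a calibrated confidence score; the thresholds, the triage labels, and the alert structure here are hypothetical placeholders rather than any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    event_id: str
    malicious_score: float  # model's calibrated probability that the event is malicious

def triage(alert: Alert, low: float = 0.05, high: float = 0.95) -> str:
    """Auto-handle only confident predictions; defer the uncertain middle band
    to a human analyst (the human-in-the-loop guardrail described above)."""
    if alert.malicious_score >= high:
        return "auto_block"
    if alert.malicious_score <= low:
        return "auto_dismiss"
    return "escalate_to_analyst"   # hypothetical hand-off to a human workflow

print(triage(Alert("evt-001", 0.99)))  # auto_block
print(triage(Alert("evt-002", 0.40)))  # escalate_to_analyst
```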

Leveraging hardware improvements. It’s not yet clear whether dedicated, highly optimized chip architectures and processors alongside new programming technologies and frameworks, or even completely different computerized systems, will be able to accommodate the ever-growing demand for AI computation. Tailor-made for AI applications, some of these new technological foundations, which closely bind and align specialized hardware and software, are more capable than ever of performing enormous volumes of parallel computations, matrix multiplications, and graph processing.

Additionally, purpose-built cloud instances for AI computation, federated learning schemes, and frontier technologies (neuromorphic chips, quantum computing, etc.) might also play a key role in this effort. In any case, these advancements alone are not likely to curb the need for algorithmic optimization that might “outpace gains from hardware efficiency.” Still, they could prove critical, as the ongoing semiconductor battle for AI dominance has yet to produce a clear winner.

The merits of data discipline

Up to now, conventional wisdom in data science has usually dictated that when it comes to data, the more you have, the better. But we’re now beginning to see that the downsides of data-hungry AI models might, over time, outweigh their undisputed advantages.

Enterprises, cyber security vendors, and other data practitioners have multiple incentives to be more disciplined in the way they collect, store, and consume data. As I’ve illustrated here, one incentive that should be top of mind is the ability to elevate the accuracy and sensitivity of AI models while alleviating privacy concerns. Organizations that embrace this approach, which relies on data dearth rather than data abundance, and exercise self-restraint, may be better equipped to drive more actionable and cost-effective AI-driven innovation over the long haul.

Eyal Balicer is Senior Vice President for Global Cyber Partnership and Product Innovation at Citi.


Big Data – VentureBeat

Researchers claim that AI-translated text is less ‘lexically’ rich than human translations

February 3, 2021   Big Data

Human interpreters make choices unique to them, consciously or unconsciously, when translating one language into another. They might explicate, normalize, or condense and summarize, creating fingerprints known informally as “translationese.” In machine learning, generating accurate translations has been the main objective thus far. But this might be coming at the expense of translation richness and diversity.

In a new study, researchers at Tilburg University and the University of Maryland attempt to quantify the lexical and grammatical diversity of “machine translationese” — i.e., the fingerprints made by AI translation algorithms. They claim to have found a “quantitatively measurable” difference between the linguistic richness of machine translation systems’ training data and their translations, which could be a product of statistical bias.

The researchers looked at a range of machine learning model architectures, including Transformer, neural machine translation, long short-term memory, and phrase-based statistical machine translation systems. In experiments, they tasked each with translating between English, French, and Spanish and compared the original text with the translations using nine different metrics.

The researchers report that in experiments, the original training data — a collection of reference translations — always had a higher lexical diversity than the machine translations regardless of the type of model used. In other words, the reference translations were consistently more diverse in terms of vocabulary and synonym usage than the translations from the models.
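
The paper aggregates several metrics that are not reproduced here; as a hedged illustration, the snippet below computes one of the simplest lexical diversity measures, the type-token ratio (unique words divided by total words), on two made-up toy strings. A real comparison would run such metrics over large corpora of reference and machine translations.

```python
import re

def type_token_ratio(text: str) -> float:
    """Lexical diversity as unique words / total words (type-token ratio)."""
    tokens = re.findall(r"\w+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

# Toy strings for illustration only; not actual reference or model output.
reference  = "the agile cat leapt over the weary dog and darted past the fence"
machine_mt = "the fast cat jumped over the dog and ran past the fence the fast cat"

print(f"reference TTR: {type_token_ratio(reference):.2f}")   # higher diversity
print(f"machine   TTR: {type_token_ratio(machine_mt):.2f}")  # lower diversity
```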

The coauthors point out that while the loss of lexical diversity could be a desirable side effect of machine translation systems (in terms of simplification or consistency), the loss of morphological richness is problematic as it can prevent systems from making grammatically correct choices. Bias can emerge, too, with machine translation systems having a stronger negative impact in terms of diversity and richness on morphologically richer languages like Spanish and French.

“As [machine translation] systems have reached a quality that is (arguably) close to that of human translations and as such are being used widely on a daily basis, we believe it is time to look into the potential effects of [machine translation] algorithms on language itself,” the researchers wrote in a paper describing their work. “All [of our] metrics indicate that the original training data has more lexical and morphological diversity compared to translations produced by the [machine translation] systems … If machine translationese (and other types of ‘NLPese’) is a simplified version of the training data, what does that imply from a sociolinguistic perspective and how could this affect language on a longer term?”

The coauthors propose no solutions to the machine translation problems they claim to have uncovered. However, they believe their metrics could drive future research on the subject.


Big Data – VentureBeat

New framework can train a robotic arm on 6 grasping tasks in less than an hour

December 17, 2020   Big Data

Advances in machine learning have given rise to a range of robotics capabilities, including grasping, pushing, pulling, and other object manipulation skills. However, general-purpose algorithms to date have been extremely sample-inefficient, limiting their applicability to the real world. Spurred on by this, researchers at the University of California, Berkeley developed a framework — Framework for Efficient Robotic Manipulation (FERM) — that leverages cutting-edge techniques to achieve what they claim is “extremely” sample-efficient robotic manipulation algorithm training. The coauthors say that, given only 10 demonstrations amounting to 15 to 50 minutes of real-world training time, a single robotic arm can learn to reach, pick, move, and pull large objects or flip a switch and open a drawer using FERM.

McKinsey pegs the robotics automation potential for production occupations at around 80%, and the pandemic is likely to accelerate this shift. A report by the Manufacturing Institute and Deloitte found that 4.6 million manufacturing jobs will need to be filled over the next decade, and challenges brought on by physical distancing measures and a sustained uptick in ecommerce activity have stretched some logistics operations to the limit. The National Association of Manufacturers says 53.1% of manufacturers anticipate a change in operations due to the health crisis, with 35.5% saying they’re already facing supply chain disruptions.

FERM could help accelerate the shift toward automation by making “pixel-based” reinforcement learning — a type of machine learning in which algorithms learn to complete tasks from recorded demonstrations — more data-efficient. As the researchers explain in a paper, FERM first collects a small number of demonstrations and stores them in a “replay buffer.” An encoder machine learning algorithm pretrains on the demonstration data contained within the replay buffer. Then, a reinforcement learning algorithm in FERM trains on images “augmented” with data generated both by the encoder and the initial demonstrations.
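
Here is a schematic Python sketch of that data flow, assuming image observations and using numpy only; the buffer, the random-crop augmentation, and the demonstration counts are illustrative stand-ins, not the authors’ actual FERM code.

```python
import numpy as np

rng = np.random.default_rng(0)

class ReplayBuffer:
    """Stores (observation, action, reward, next_observation) transitions."""
    def __init__(self):
        self.transitions = []

    def add(self, obs, action, reward, next_obs):
        self.transitions.append((obs, action, reward, next_obs))

    def sample(self, batch_size):
        idx = rng.choice(len(self.transitions), size=batch_size)
        return [self.transitions[i] for i in idx]

def random_crop(img, out=84):
    """The kind of image augmentation applied to sampled observations."""
    h, w, _ = img.shape
    top, left = rng.integers(0, h - out + 1), rng.integers(0, w - out + 1)
    return img[top:top + out, left:left + out]

# 1) Seed the buffer with a handful of demonstration transitions.
buffer = ReplayBuffer()
for _ in range(10):                      # "a small number of demonstrations"
    obs, next_obs = rng.random((100, 100, 3)), rng.random((100, 100, 3))
    buffer.add(obs, rng.uniform(-1, 1, size=7), 1.0, next_obs)

# 2) An encoder would pretrain on the demonstration images (placeholder), then
# 3) an RL agent would train on augmented samples drawn from the same buffer.
batch = buffer.sample(4)
augmented = [random_crop(obs) for obs, *_ in batch]
print(augmented[0].shape)  # (84, 84, 3)
```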

According to the researchers, FERM is easy to assemble in that it only requires a robot, a graphics card, two cameras, a handful of demonstrations, and a reward function that guides the reinforcement learning algorithm toward a goal. In experiments, they say that FERM enabled an xArm to learn six tasks within 25 minutes of training time (corresponding to 20 to 80 episodes of training) with an average success rate of 96.7%. The arm could even generalize to objects not seen during training or demonstrations and deal with obstacles blocking its way to goal positions.


“To the best of our knowledge, FERM is the first method to solve a diverse set of sparse-reward robotic manipulation tasks directly from pixels in less than one hour,” the researchers wrote. “Due to the limited amount of supervision required, our work presents exciting avenues for applying reinforcement learning to real robots in a quick and efficient manner.”

Open source frameworks like FERM promise to advance the state of the art in robotic manipulation, but there remain questions about how to measure progress. As my colleague Khari Johnson writes, metrics used to measure progress in robotic grasping can vary based on the task. For example, for robots operating in a mission-critical environment like space, accuracy matters above all.

“Under certain circumstances, if we have nice objects and you have a very fast robot, you can get there [human picking rates],” roboticist Ken Goldberg told VentureBeat in a previous interview. “But they say humans are like 650 per hour; that’s an amazing level. It’s very hard to beat humans. We’re very good. We’ve evolved over millions of years.”

Big Data – VentureBeat

SuiteSuccess for SuiteCommerce: Ecommerce in 30 Days or Less

March 14, 2020   NetSuite

Posted by Austin Caldwell, Senior Product Marketing Manager

Launching a new ecommerce website may seem daunting. Whether businesses are building their first online store or replatforming from an existing system, creating a feature-rich ecommerce website from start to finish takes time, energy and resources. Organizations typically spend five to eight months or more determining which ecommerce solution to use, drafting the system architecture, designing how it will look and actually developing the storefront and integrating third party extensions.

The most nerve-racking aspect is there are no guarantees everything will come together fully functioning or on schedule. It is not uncommon for merchants to abandon development mid-project to pivot to another system due to technology gaps or have their launch date slip because their team was too busy focusing on day-to-day operations in their old system. These sorts of barriers can scare off businesses from even pursuing a new ecommerce website. Luckily, NetSuite users are able to mitigate these fears with a unified solution that is quick to implement.

SuiteSuccess for SuiteCommerce makes it easy for merchants to add B2B and B2C ecommerce capabilities to their NetSuite solution. Using a rapid implementation model based on the experience of thousands of successful ecommerce deployments, SuiteSuccess for SuiteCommerce accelerates time to market, making it possible to launch a full-featured online store in 30 days or less.

With SuiteCommerce a part of NetSuite’s suite of applications, ecommerce is natively unified with financials, order and inventory management and CRM—providing a single view of customer, order, inventory and other real-time data that makes seamless omnichannel experiences possible. By unifying their front and back-end systems, organizations are able to remove manual processes, reduce integration costs and focus on driving sales.

NetSuite has recently added capabilities to the SuiteSuccess for SuiteCommerce offering, plus additional onboarding support to assist post-implementation.

Enriching the online experience 

NetSuite’s SuiteSuccess approach makes replatforming painless by migrating catalog, website images and product descriptions to SuiteCommerce with a simple and straightforward process. NetSuite’s team has SEO expertise to provide valuable guidance to help maintain optimal search rankings, so businesses are ready to grow from the start.

SuiteCommerce includes a host of capabilities including pre-built responsive design themes, powerful site search, easy to use site management tools and other features to engage shoppers, increase conversions and improve average order values.

In addition to the capabilities included with SuiteCommerce, the SuiteSuccess implementation provides additional features designed to address the needs of business buyers and direct consumers that now include:

  • Product Comparisons – Create a comparison table of multiple products for shoppers to review feature differences.
  • Back-In-Stock Notification – Encourage shoppers to sign up to receive back-in-stock notifications for items that were temporarily unavailable.
  • Infinite Scroll – Replace search results pagination with continuous scrolling.
  • Item Badges – Display a visual icon identifying whether an item is New, On Sale or a Best Seller on category and product detail pages.
  • Personalized Catalog Views – Publish different products, pricing and inventory catalogs to distinct business buyers based on groups and segments.
  • Gift Certificates – Allow shoppers to check their gift certificate balance in their online account.
  • Guest Order Status – Allow shoppers to quickly see the status of their orders without creating or logging into an account.
  • Responsive Email Templates – Optimize order status email templates (confirmed, approved, cancelled and shipped) with responsive design to control the layout across different device sizes. 

Ensuring Success After Go-Live

After NetSuite Professional Services has implemented a business’ new online store, administrators will receive additional onboarding services for up to six months to help navigate their new ecommerce solution. This support provides web store managers, sales representatives, marketers and fulfillment coordinators the one-on-one assistance they need to get comfortable with their new SuiteCommerce store front. Onboarding services will include:

  • Onboarding Management: Facilitate business knowledge transfer from NetSuite Professional Services team to better guide end-users post implementation. Conduct recurring bi-weekly account status meetings to discuss business goals and review key performance indicators.
  • Expert Advice: Provide strategic guidance on application usage, configuration, optimization and maintenance. Administrators will receive guidance on day-to-day site management and how to further extend their store front with SuiteApps, third-party integrations, workflows, digital marketing, promotions and order management.
  • Guided Support: Address “how do I” questions with contextual assistance and provide step-by-step instructions for unique solutions. Coordinate with NetSuite and business stakeholders to assist with open issues and escalated support cases as needed.

An Ecommerce Site in 30 Days 

With SuiteSuccess for SuiteCommerce, NetSuite delivers an ecommerce solution that is easy to implement, easy to manage and easy to enhance. Learn how B2B and B2C merchants are growing their business with SuiteSuccess for SuiteCommerce.

Posted on Fri, March 13, 2020 by NetSuite

The NetSuite Blog

DeepMind’s MEMO AI solves novel reasoning tasks with less compute

January 31, 2020   Big Data

Can AI capture the essence of reasoning — that is, the appreciation of distant relationships among elements distributed across multiple facts or memories? Alphabet subsidiary DeepMind sought to find out in a study published on the preprint server arXiv.org, which proposes an architecture — MEMO — with the capacity to reason over long distances. The researchers say that MEMO’s two novel components — the first of which introduces a separation between facts and memories stored in external memory, and the second of which employs a retrieval system that allows a variable number of “memory hops” before an answer is decided upon — enable it to solve novel reasoning tasks.

“[The hippocampus supports the] flexible recombination of single experiences in novel ways to infer unobserved relationships … called inferential reasoning,” wrote the coauthors of the paper. “Interestingly, it has been shown that the hippocampus is storing memories independently of each other through a process called pattern separation [to] minimize interference between experiences. A recent line of research sheds light on this … by showing that the integration of separated experiences emerges at the point of retrieval through a recurrent mechanism, [which] allows multiple pattern separated codes to interact and therefore support inference.”

DeepMind’s work, then, takes inspiration from this research to investigate and enhance inferential reasoning in machine learning models. Drawing on the neuroscience literature, they devised a procedurally generated task called paired associative inference (PAI) that’s meant to capture inferential reasoning by forcing AI systems to learn abstractions to solve previously unseen problems. They then architected MEMO — which, when given an input query, outputs a sequence of potential answers — with a preference for representations that minimize the necessary computation.

The researchers say MEMO retains a set of facts in memory and learns a projection, paired with a mechanism that enables greater flexibility in the use of memories, and that it’s different from typical AI models because it adapts the amount of compute time to the complexity of the task. MEMO takes a cue from REMERGE, a model of human associative memory in which the content retrieved from memory is recirculated as a new query, and the difference between the content retrieved at successive time steps is used to determine whether the model has settled into a fixed point. Accordingly, MEMO outputs an action that indicates whether it wishes to continue computing and querying its memory or whether it’s able to answer a given task.
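
A toy sketch of that adaptive retrieval loop, assuming a small matrix of stored fact vectors, soft attention for retrieval, and a simple fixed-point test for halting; it mirrors the recirculation idea schematically rather than reproducing MEMO’s learned components.

```python
import numpy as np

def answer_with_hops(query, memory, max_hops=10, tol=1e-3):
    """Repeatedly retrieve from memory, feeding the retrieved content back in
    as the next query, and stop once the retrieval stops changing."""
    q = query / np.linalg.norm(query)
    for hop in range(1, max_hops + 1):
        scores = memory @ q                              # similarity to each stored fact
        weights = np.exp(scores) / np.exp(scores).sum()  # soft attention over memory
        retrieved = weights @ memory                     # blended memory content
        retrieved /= np.linalg.norm(retrieved)
        if np.linalg.norm(retrieved - q) < tol:          # settled into a fixed point
            return retrieved, hop
        q = retrieved                                    # recirculate as the new query
    return q, max_hops

rng = np.random.default_rng(0)
memory = rng.normal(size=(32, 16))                       # 32 stored "facts"
memory /= np.linalg.norm(memory, axis=1, keepdims=True)
answer, hops = answer_with_hops(rng.normal(size=16), memory)
print(f"stopped after {hops} hop(s)")
```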

In tests, the DeepMind researchers compared MEMO with two baseline models, as well as the current state-of-the-art model on Facebook AI Research’s bAbI suite (a set of 20 tasks for evaluating text understanding and reasoning). MEMO was able to achieve the highest accuracy on the PAI task, and it was the only architecture that successfully answered the most complex inference queries on longer sequences. Furthermore, MEMO required only three “hops” to solve a task, compared with the best-performing baseline model’s 10 steps. And in another task that required the models to find the shortest path between two nodes in a given graph, MEMO outperformed the baselines on more complex graphs by 20%.

Big Data – VentureBeat

Alphabet’s trash-sorting robots have reduced office waste contamination to ‘less than 5%’

November 22, 2019   Big Data

Alphabet’s X team for moonshot projects has been using robots to sort compost, recycling, and landfill waste at X’s own offices in recent months. The robots on wheels can drive to trash sites in the Mountain View, California office and sort recycling and trash using a combination of computer vision and robotic arms.

The news today came as Alphabet unveiled the Everyday Robot Project, a moonshot to make robots augment human activity in the physical world in environments like the home or office the same way computers do in the virtual world. The Everyday Robot Project has been underway for years, as many members of the team that collaborates with Google AI joined Alphabet in 2015 or 2016.

“During the last few months, our robots have sorted thousands of pieces of trash and reduced our office’s waste contamination levels from 20% — which is what it is when people put objects in the trays — to less than 5%,” Alphabet X robotics project lead Hans Peter Brondmo said in a post today. “It will be years before the helpful robots we imagine are here, but we’re looking forward to sharing more robot adventures along the way.”

A number of machine learning techniques were used to create the system, including the use of synthetic data from virtual environments with reinforcement learning.

Last month, Google AI launched ROBEL, a benchmark and kit to create affordable and robust robotic systems.

Alphabet’s trash-sorting robot isn’t alone. AMP Robotics raised $16 million for its AI that automates trash sorting. There’s also Oscar, the trash-sorting computer vision system being piloted on corporate campuses, and MIT’s system that uses touch to recognize recyclable materials like paper, plastic, and metal.

Companies like Google, Nvidia, and Facebook have gone deeper into robotics challenges in the past year, both to create hardware that works in the real world and to create complex multimodal AI systems.

Big Data – VentureBeat

WHEN ADS WERE LESS P.C. AND MORE ENGAGING

October 9, 2019   Humor


H/T: KIM

ANTZ-IN-PANTZ ……

WHEN ADS WERE LESS P.C. AND MORE ENGAGING

October 6, 2019   Humor


H/T: KIM

ANTZ-IN-PANTZ ……

WHEN ADS WERE LESS P.C. AND MORE ENGAGING

October 5, 2019   Humor


H/T: KIM

ANTZ-IN-PANTZ ……

WHEN ADS WERE LESS P.C. AND MORE ENGAGING

September 29, 2019   Humor


H/T: KIM

ANTZ-IN-PANTZ ……