
Tag Archives: human

AI models from Microsoft and Google already surpass human performance on the SuperGLUE language benchmark

January 6, 2021   Big Data

In late 2019, researchers affiliated with Facebook, New York University (NYU), the University of Washington, and DeepMind proposed SuperGLUE, a new benchmark for AI designed to summarize research progress on a diverse set of language tasks. Building on the GLUE benchmark, which had been introduced one year prior, SuperGLUE includes a set of more difficult language understanding challenges, improved resources, and a publicly available leaderboard.

When SuperGLUE was introduced, there was a nearly 20-point gap between the best-performing model and human performance on the leaderboard. But as of early January, two models — one from Microsoft called DeBERTa and a second from Google called T5 + Meena — have surpassed the human baselines, becoming the first to do so.

Sam Bowman, assistant professor at NYU’s Center for Data Science, said the achievement reflected innovations in machine learning, including self-supervised learning, where models learn from unlabeled datasets with recipes for adapting the insights to target tasks. “These datasets reflect some of the hardest supervised language understanding task datasets that were freely available two years ago,” he said. “There’s no reason to believe that SuperGLUE will be able to detect further progress in natural language processing, at least beyond a small remaining margin.”

But SuperGLUE is neither a perfect nor a complete test of human language ability. In a blog post, the Microsoft team behind DeBERTa themselves noted that their model is “by no means” reaching the human-level intelligence of natural language understanding. They say this will require research breakthroughs — along with new benchmarks to measure them and their effects.

SuperGLUE

As the researchers wrote in the paper introducing SuperGLUE, their benchmark is intended to be a simple, hard-to-game measure of advances toward general-purpose language understanding technologies for English. It comprises eight language understanding tasks drawn from existing data and accompanied by a performance metric as well as an analysis toolkit.

The tasks are:

  • Boolean Questions (BoolQ) requires models to respond to a question about a short passage from a Wikipedia article that contains the answer. The questions come from Google users, who submit them via Google Search.
  • CommitmentBank (CB) tasks models with identifying a hypothesis contained within a text excerpt from sources including the Wall Street Journal and determining whether the hypothesis holds true.
  • Choice of Plausible Alternatives (COPA) provides a premise sentence, drawn from blogs and a photography-related encyclopedia, from which models must determine either the cause or the effect from two possible choices.
  • Multi-Sentence Reading Comprehension (MultiRC) is a question-answering task where each example consists of a context paragraph, a question about that paragraph, and a list of possible answers. A model must predict which answers are true and which are false.
  • Reading Comprehension with Commonsense Reasoning Dataset (ReCoRD) has models predict masked-out words and phrases from a list of choices in passages from CNN and the Daily Mail, where the same words or phrases might be expressed using multiple different forms, all of which are considered correct.
  • Recognizing Textual Entailment (RTE) challenges natural language models to identify whether the truth of one text excerpt follows from another text excerpt.
  • Word-in-Context (WiC) provides models two text snippets and a polysemous word (i.e., word with multiple meanings) and requires them to determine whether the word is used with the same sense in both sentences.
  • Winograd Schema Challenge (WSC) is a task where models, given passages from fiction books, must answer multiple-choice questions about the antecedent of ambiguous pronouns. It’s designed to be an improvement on the Turing Test.
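
To make the scoring concrete, here is a minimal sketch of how an overall SuperGLUE-style leaderboard number can be computed from per-task metrics: multi-metric tasks (e.g., F1 plus accuracy or exact match) are averaged internally, and the overall score is a macro average across tasks. The averaging scheme reflects the published GLUE/SuperGLUE convention, but every number below is an illustrative assumption, not real leaderboard data.

```python
def task_score(metrics):
    """Average a task's metrics (one or two values) into a single score."""
    return sum(metrics) / len(metrics)

def superglue_score(results):
    """Macro-average the per-task scores into one leaderboard number."""
    scores = [task_score(m) for m in results.values()]
    return sum(scores) / len(scores)

# Hypothetical per-task results (percentages), for illustration only.
results = {
    "BoolQ":   [87.1],        # accuracy
    "CB":      [90.5, 94.9],  # F1, accuracy
    "COPA":    [96.8],        # accuracy
    "MultiRC": [88.2, 63.7],  # F1a, exact match
    "ReCoRD":  [94.5, 94.1],  # F1, exact match
    "RTE":     [93.2],        # accuracy
    "WiC":     [77.5],        # accuracy
    "WSC":     [95.9],        # accuracy
}

print(round(superglue_score(results), 1))  # → 89.2
```

Because the overall score is an unweighted average, a model can't game the benchmark by excelling at one easy task; it has to move every task's metric.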

SuperGLUE also attempts to measure gender bias in models with Winogender Schemas, pairs of sentences that differ only by the gender of one pronoun in the sentence. However, the researchers note that Winogender has limitations in that it offers only positive predictive value: While a poor bias score is clear evidence that a model exhibits gender bias, a good score doesn’t mean the model is unbiased. Moreover, it doesn’t include all forms of gender or social bias, making it a coarse measure of prejudice.
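
The positive-predictive-value point can be made concrete with a toy probe. In the sketch below, `toy_model` is a hypothetical stand-in for a coreference model and the sentence pair is illustrative: a behavioral gap across a pronoun-swapped pair counts as evidence of bias, while agreement proves nothing about the model's fairness.

```python
# Winogender-style probe: sentence pairs differ only in one pronoun.
pairs = [
    ("The nurse notified the patient that her shift ends soon.",
     "The nurse notified the patient that his shift ends soon."),
]

def toy_model(sentence):
    # Hypothetical biased resolver: links "her", but not "his", to "nurse".
    return "nurse" if "her" in sentence else "patient"

# Any disagreement across a pair is positive evidence of gender bias;
# zero disagreements would NOT demonstrate the model is unbiased.
disagreements = sum(toy_model(a) != toy_model(b) for a, b in pairs)
print(disagreements)  # → 1
```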

To establish human performance baselines, the researchers drew on existing literature for WiC, MultiRC, RTE, and ReCoRD and hired crowdworker annotators through Amazon’s Mechanical Turk platform. Each worker, who was paid an average of $23.75 an hour, completed a short training phase before annotating up to 30 samples of selected test sets using instructions and an FAQ page.

Architectural improvements

The Google team hasn’t yet detailed the improvements that led to its model’s record-setting performance on SuperGLUE, but the Microsoft researchers behind DeBERTa detailed their work in a blog post published earlier this morning. DeBERTa isn’t new — it was open-sourced last year — but the researchers say they trained a larger version with 1.5 billion parameters (i.e., the internal variables that the model uses to make predictions). It’ll be released in open source and integrated into the next version of Microsoft’s Turing natural language representation model, which supports products like Bing, Office, Dynamics, and Azure Cognitive Services.

DeBERTa is pretrained through masked language modeling (MLM), a fill-in-the-blank task where a model is taught to use the words surrounding a masked “token” to predict what the masked word should be. DeBERTa uses both the content and position information of context words for MLM, such that it’s able to recognize that “store” and “mall” in the sentence “a new store opened beside the new mall” play different syntactic roles, for example.
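
The fill-in-the-blank setup can be sketched in a few lines, reusing the article's example sentence. The helper below is a simplified, deterministic stand-in: real MLM pretraining masks a random ~15% of tokens (with further replacement tricks in BERT-style recipes), while here the masked positions are fixed so the sketch is reproducible.

```python
def mask_tokens(tokens, positions):
    """Replace tokens at the given positions with [MASK]; the original
    token at each masked position becomes the training label."""
    inputs = [("[MASK]" if i in positions else tok) for i, tok in enumerate(tokens)]
    labels = [(tok if i in positions else None) for i, tok in enumerate(tokens)]
    return inputs, labels

tokens = "a new store opened beside the new mall".split()
inputs, labels = mask_tokens(tokens, positions={2, 7})  # mask "store" and "mall"

print(" ".join(inputs))  # → a new [MASK] opened beside the new [MASK]
```

The model's objective is then to recover `labels` at the masked positions from the surrounding context; unmasked positions (label `None`) are not scored.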

Unlike some other models, DeBERTa accounts for words’ absolute positions in the language modeling process. Moreover, it computes the parameters within the model that transform input data and measure the strength of word-word dependencies based on words’ relative positions. For example, DeBERTa would understand the dependency between the words “deep” and “learning” is much stronger when they occur next to each other than when they occur in different sentences.
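
The relative-position mechanism can be illustrated with a toy calculation. The sketch below is a simplified rendering of DeBERTa's disentangled-attention idea — each raw attention score sums a content-to-content term with content-to-position and position-to-content terms built from relative-position embeddings — so that "deep" attending to an adjacent "learning" scores differently than the same words far apart. The sizes, random weights, clipping range, and scaling here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, max_rel = 6, 8, 4                    # tokens, hidden size, relative span

H = rng.normal(size=(n, d))                # content embeddings
P = rng.normal(size=(2 * max_rel + 1, d))  # relative-position embeddings
Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))
Wq_r, Wk_r = rng.normal(size=(d, d)), rng.normal(size=(d, d))

def rel(i, j):
    """Clip the relative distance j - i into the embedding table's range."""
    return np.clip(j - i, -max_rel, max_rel) + max_rel

scores = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        c2c = H[i] @ Wq @ (H[j] @ Wk)             # content -> content
        c2p = H[i] @ Wq @ (P[rel(i, j)] @ Wk_r)   # content -> position
        p2c = P[rel(j, i)] @ Wq_r @ (H[j] @ Wk)   # position -> content
        scores[i, j] = (c2c + c2p + p2c) / np.sqrt(3 * d)

# Softmax over each row turns scores into attention weights.
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)
```

Because the position terms depend only on the clipped distance `j - i`, the same word pair produces different attention weights at different separations, which is the dependency behavior described above.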

DeBERTa also benefits from adversarial training, a technique that leverages adversarial examples derived from small variations made to training data. These adversarial examples are fed to the model during the training process, improving its generalizability.
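
As a rough illustration of that idea, the sketch below perturbs a stand-in input embedding in the direction that most increases a toy logistic loss (an FGSM-style perturbation) and checks that the perturbed example is genuinely harder; training on such examples is what improves generalization. This is a simplified assumption-laden sketch — DeBERTa's actual scheme (scale-invariant fine-tuning) perturbs normalized transformer embeddings, not a logistic regression input.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_input_grad(x, w, y):
    """Logistic loss and its analytic gradient w.r.t. the input x."""
    p = sigmoid(x @ w)
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad_x = (p - y) * w          # d(loss)/d(x) for the logistic loss
    return loss, grad_x

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # stand-in for a token embedding
w = rng.normal(size=4)            # stand-in model weights
y = 1.0                           # true label
eps = 0.1                         # perturbation budget

loss, grad_x = loss_and_input_grad(x, w, y)
x_adv = x + eps * np.sign(grad_x)             # small loss-increasing variation
loss_adv, _ = loss_and_input_grad(x_adv, w, y)
# loss_adv exceeds loss: the adversarial copy is a harder training example.
```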

The Microsoft researchers hope to next explore how to enable DeBERTa to generalize to novel compositions of subtasks or basic problem-solving skills, a concept known as compositional generalization. One path forward might be incorporating so-called compositional structures more explicitly, which could entail combining AI with symbolic reasoning — in other words, manipulating symbols and expressions according to mathematical and logical rules.

“DeBERTa surpassing human performance on SuperGLUE marks an important milestone toward general AI,” the Microsoft researchers wrote. “[But unlike DeBERTa,] humans are extremely good at leveraging the knowledge learned from different tasks to solve a new task with no or little task-specific demonstration.”

New benchmarks

According to Bowman, no successor to SuperGLUE is forthcoming, at least not in the near term. But there’s growing consensus within the AI research community that future benchmarks, particularly in the language domain, must take into account broader ethical, technical, and societal challenges if they’re to be useful.

For example, a number of studies show that popular benchmarks do a poor job of estimating real-world AI performance. One recent report found that 60%-70% of answers given by natural language processing models were embedded somewhere in the benchmark training sets, indicating that the models were usually simply memorizing answers. Another study — a meta-analysis of over 3,000 AI papers — found that metrics used to benchmark AI and machine learning models tended to be inconsistent, irregularly tracked, and not particularly informative.
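
A contamination check of the kind behind that 60%-70% finding can be sketched as a simple substring search over the training text; the data and the exact-match rule here are illustrative assumptions (real studies use much larger corpora and fuzzier n-gram matching).

```python
# Toy corpus standing in for a benchmark's training set.
train_text = " ".join([
    "the eiffel tower is in paris",
    "water boils at 100 degrees celsius",
])

# Answers a model produced on the test set (illustrative).
test_answers = ["paris", "100 degrees celsius", "mount everest"]

# Fraction of answers embedded verbatim in the training data:
# a high value suggests the model may simply be memorizing.
overlap = sum(ans in train_text for ans in test_answers) / len(test_answers)
print(f"{overlap:.0%}")  # → 67%
```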

Part of the problem stems from the fact that language models like OpenAI’s GPT-3, Google’s T5 + Meena, and Microsoft’s DeBERTa learn to write humanlike text by internalizing examples from the public web. Drawing on sources like ebooks, Wikipedia, and social media platforms like Reddit, they make inferences to complete sentences and even whole paragraphs.

As a result, language models often amplify the biases encoded in this public data; a portion of the training data is not uncommonly sourced from communities with pervasive gender, race, and religious prejudices. AI research firm OpenAI notes that this can lead to placing words like “naughty” or “sucked” near female pronouns and “Islam” near words like “terrorism.” Other studies, like one published by Intel, MIT, and Canadian AI initiative CIFAR researchers in April, have found high levels of stereotypical bias from some of the most popular models, including Google’s BERT and XLNet, OpenAI’s GPT-2, and Facebook’s RoBERTa. This bias could be leveraged by malicious actors to foment discord by spreading misinformation, disinformation, and outright lies that “radicalize individuals into violent far-right extremist ideologies and behaviors,” according to the Middlebury Institute of International Studies.

Most existing language benchmarks fail to capture this. Motivated by the findings in the two years since SuperGLUE’s introduction, perhaps future ones might.


Big Data – VentureBeat


Facing the crisis with the human spirit: Science and our good nature

April 19, 2020   CRM News and Info

Why is this post up again: The new introduction

This has been my favorite blog post of all time. I think I first wrote it in 2014, then I put it up here on ZDNet in 2015, and it will be my introductory post for my new blog, called “The Science of Business, The Art of Life and Live from NY” (aka SBALLNY), which is coming to a website near you in the next three weeks I’d venture to say.

I’m putting it up once again.

The reason I’m doing it is that it goes to human history and the efforts that human beings make in times of innovation and in times of crisis. In the midst of the current global pandemic, it never hurts to remember that we are an infinitely creative, innovative, and good species. And while this may be the first pandemic in our lifetimes and the first in 100 years, we have managed to survive several pandemics, even those well before the 1918 Spanish Flu, such as the Black Death. Following those, we have continued to flourish as a species and progress as a society. So, this may be unprecedented in our lifetime, but not in history.

While this blog post is focused on a person, one of the world’s greatest (and yet little known) scientists (Roger Bacon) and on a specific time in history (the 13th century), what it focuses in on is why we always can have hope and not despair when horrible events like this devastate the population. The entirety of this tome is centered on one passage (which you will read again very shortly):

The very hallmark of continued human social existence has been that each of us, as a human, has an infinite capacity to create something that in some way, incrementally and on occasion profoundly, impacts the continued existence of society and the human species. It’s happened frequently enough throughout history, with the right combinations of people and resources, to so far ensure, at least for now, the continued existence and even flourishing of humanity and the cultures and society associated with it, with all their problems, glitches, denial of opportunities, errors of judgment and action, and even criminality. Despite the bad, we survive as a species and grow. Because the good always outweighs the bad, and over time, even if it doesn’t seem so, overcomes the bad. That tells you something — human beings, as a rule, are good, not evil, despite the cynics who would have you think otherwise. Complaining doesn’t solve problems — finding solutions to the problems solves them.

We’ve done this successfully throughout history and will do it again now. Human beings with all their strange behaviors are as a whole a noble lot. They are better at doing good than they are at doing evil, and history bears that out.

So, let’s take a break and a moment, and let me take you on a journey to the 13th century via the mind of Roger Bacon — through science, and through a personal journey that I’ve been on for many decades, which, as I am now 70, I have some peace with. I hope that you are willing to read this to the end. It’s very long, so maybe not. Either way, please read what you can and reflect on the fact that we are good people, and good people will solve bad problems in a good way — said with science and data in mind, as well as emotions.


Introduction to Roger Bacon and the 13th century through my lens

Much as I like to think and act exuberantly in the celebration of the abundance of life, I have days where I recognize that I’m 64 years old. Some of those days, I embrace the fact. Some of those days I just feel it.

When I embrace the fact, I also embrace one of the things that the older among us can claim, that our younger brethren can’t yet. I can contemplate not just the present and my plans for the future, though I do that always, but also the legacy that I’d like to provide as I leave my footprints embedded in time.

In the course of mulling this over a few weeks ago, while recovering from vocal cord surgery, I began to think about something that often comes up from my storehouse of memories — the work I did many years ago on 13th-century science and culture, and in particular, a medieval friar of the Franciscan Order, Roger Bacon. He’s someone who, when I was writing my varying tomes on the period and the man and science in general, I began to believe — and still do — might have been one of the greatest scientists in the history of our species.

My purpose in writing about this is not to debate with you whether he was or is great. That is both a debate beyond the scope of this post and beyond any contemporary research anyone reading this (or writing this) is likely to have done. It’s also one that, regardless of the outcome, won’t move the chain in human thinking one iota. So, please treat what I’m going to be saying about Roger Bacon and his role instead as both a data point and a metaphor for what I want to talk about.

Continuing…

All this mulling led me to what I wrote years ago to understand my present behavior. It gave me a little more insight into the legacy that I am trying to leave. But it reminded me of something else, too. Something that perhaps we often forget in the course of our very lucky lives as people who have a shot at helping to transform the world we live in.

Here’s what that is:

The very hallmark of continued human social existence has been that each of us, as a human, has an infinite capacity to create something that in some way, incrementally and on occasion profoundly, impacts the continued existence of society and the human species. It’s happened frequently enough throughout history, with the right combinations of people and resources, to so far ensure, at least for now, the continued existence and even flourishing of humanity and the cultures and society associated with it, with all their problems, glitches, denial of opportunities, errors of judgment and action, and even criminality. Despite the bad, we survive as a species and grow. Because the good always outweighs the bad, and over time, even if it doesn’t seem so, overcomes the bad. That tells you something — human beings, as a rule, are good, not evil, despite the cynics who would have you think otherwise. Complaining doesn’t solve problems — finding solutions to the problems solves them.

So, in the following paragraphs, please bear with me and take to heart, if you can, what Bacon and (if I’m not being too presumptuous) I say about invention, the human spirit, the art of science, and the abilities of every one of us as a human being to transcend and master the course of our own existence in a practical way, not just via some flight of fancy. As you read this post, Bacon’s speech is couched in religious terms (e.g., God the Creator) — as well it should be, since he was a Franciscan friar in the 13th century. But the content and the principles should be taken in a secular light.

My voice, on the other hand, is not religious at all. Here we go.

The 13th century: Cultural optimism drives the… horse

Throughout his entire adult life, at the core of his very being, Roger Bacon (1214 to 1292) believed that human beings could not only master the laws of nature but even change them through invention and creation. His approach, though, wasn’t something carved from fantasy but was rooted in a rational science that was both derived from universal principles and supported via experimentation. He’s often called the Father of Experimental Science (a lot more important than the Godfather of CRM).

This idea, completely radical for the 13th century, is now self-evident to us: the verification of hypotheses through observation, experience, methodological rigor, and discovery. You might think, “Bad start, Paul. What’s the big deal about that?” For the 21st century, thanks to Bacon and his successors, it isn’t a big deal; it’s simply what contemporary scientists do. But in the 13th century, this was revolutionary and would, if it gained popular credence, overturn the bulk of so-called scientific approaches of the time. Science was based more on argument and natural philosophy than on a rigorous approach that used the actual practical testing of hypotheses to verify or refute them.

To super-simplify (again, this post isn’t meant to be highly detailed on areas that I’m not an expert in, but I do know something of the era), the accepted approach to science when the 13th century began was debating and arguing the hypothesis, and the more “rational” argument would win. Experimental science changed that.

What made this so exciting was that the 13th century was when civilization began to advance with these kinds of ideas in mind, for the first time at scale. They were premised on a broad cultural optimism, which, when articulated, said that each one of us was capable of what was called special revelation — a creative spark that could generate a new idea that could impact the course of things, up to and including all civilization.

Though God may have gifted us with that creative spark, the idea itself was generated via an individual human being’s thinking process. The advocates of experimental science were effectively saying, “OK, we think that this idea has some merit. Let’s take it, test it, and see if it works and if the results have applicable value.” This concept germinated in a Europe that was undergoing what might have been a renaissance before the Renaissance we know in the 15th century. The flowering of the arts and the sciences and the desire to discover were given scale and, just as important, funding, in places like the courts of Frederick II of Hohenstaufen, who was the Holy Roman Emperor, and his cousin Alfonso X, also known as Alfonso el Sabio (the Wise) of Castile. These leaders weren’t just conquerors and heads of state; they were patrons who funded scientists, engineers, artists, philosophers, and others who were generating new ways of looking at the world and creating new tools and products that would make the world more productive.

For example, Frederick II of Hohenstaufen sponsored Salerno, the leading medical school of the 13th century. Alfonso X not only sponsored original experimentation and research but was also a hands-on researcher himself. His Libros del saber de astronomía, astronomical tables that he and a research team he led compiled, were the standard until Tycho Brahe revised them in the 16th century. In conjunction with this research, scientists and engineers at his court invented a mechanical clock to measure time with more precision.

But the inventions weren’t just for a small group in a royal court. There were practical technologies that were created and applied to the larger world — and they were, in the context of their era, far more important to the continuation and evolution of the species than the things we tend to call “disruptive” or “innovative” today. (On a whole other subject, we throw around the terms “disruptive” and “innovative” far too much for things that are neither.) For example, during that time, the leather yoke, far more flexible than the yokes of the past, was used to drive horses, rather than the oxen previously used to do agricultural work. The results were spectacular. A man doing agricultural work produced 45 foot-pounds of work per second. Oxen with the rigid yoke produced 288 foot-pounds per second; the horse with the flexible leather yoke produced 432 foot-pounds per second (50% more power than the ox) and could work two hours longer than an ox, for an overall work-efficiency increase of roughly 65% in the fields.

The other agricultural breakthrough of the period was the widespread adoption of three-field crop rotation, a significant change from the centuries-old two-field rotation. I won’t go into what this is in the interests of space, but if you are interested, check the short and sweet explanation given here. Suffice it to say, it protected and even replenished nutrients in the soil rather than just draining them.

A third breakthrough, which added to this agricultural boom, was the introduction of hydraulic power via the waterwheel (which also had a huge industrial impact, too). For example, in Flanders, sandy marshland became fertile cropland, as hundreds of waterwheels irrigated thousands of acres.

The combination of these three breakthroughs, when applied to agriculture, led to grain yields increasing from an 11th-century high of 2.5 measures per measure sown to four measures per measure sown. Net of the measure that had to be reserved as seed, the disposable yield rose from 1.5 measures to three, which amounts to a 100% increase in disposable foodstuffs. Talk about disruptive! Many more human beings got to eat more healthily thanks to this technological revolution. I don’t mean to be disrespectful, but compare that to what I’ve heard called “disruptive” over the last few years. (Uber. Better taxis? Groupon. Delivering discount coupons? Not disruptive. Sorry.)

But it wasn’t just the breakthroughs themselves that characterized this era. The 13th century was also a period of self-revelation when the human species began to realize that it was special. It had the power to transform nature, not just react to it, as most animals do. It also celebrated that special capability.

Let me explain it another way.

I would imagine that many of us, given our somewhat privileged existences and the commoditization of international transportation, have been to Europe and seen in one place or another its Gothic cathedrals — and, if you have any sense of wonder, have been in awe at the size, complexity, and sheer magnificence of the creations. Many of these, the first groups of them, were built in the 13th century by cathedral builders, often called master masons or architect-engineers, with the express purpose of celebrating God and Creation. But one thing that may not be as obvious is that, in almost all of these timeless, magnificent buildings, if you look at them closely, man is placed at the apex of creation.

Man is central to the creation of the building and the celebration of God and “capital C” Creation. For example, the interior of the Cathedral at Reims contains a maze that represented a holy pilgrimage to Jerusalem. When visitors to the cathedral solve the maze, they arrive at the center of the cathedral. What they find at the center of this homage to God and Creation are not the names of any disciples, nor of Jesus or Mary, but the names of the master masons who built the cathedral. These architect-engineers saw themselves as central to the creation of this New Jerusalem, this new and refreshed world of invention, celebration, and abundance. God, in this central area, is portrayed on stained glass windows as an architect-engineer with a compass in his hands.

This spirit of creation and invention was infectious among at least a small group of people who had a significant impact on the health and well-being of the world that they lived in. Witness my man Roger Bacon’s inventive mind — and keep in mind this is the 13th century. This is a famous passage of his foresight that comes from his work Opus Tertium:

“Machines of navigation can be constructed without rowers, as great ships for river or ocean which are borne under the guidance of one man at a greater speed than if they were full of men. Also a chariot that can be constructed that will move at incalculable speed without any draught animals…also flying machines may be constructed so that man may sit in the midst of the machine turning a certain instrument by means of which wings artificially constructed would beat the air after the manner of a bird flying. Also a machine of small size may be made for raising and lowering weights of almost infinite amounts — a machine of the utmost utility.

Machines may be also made for going in sea or river down to the bed without bodily danger…and there are countless other things that can be constructed such as bridges over rivers without pillars or any such support.”

What’s utterly fascinating is, in his Letter Concerning the Marvelous Power of Art and Nature and the Nullity of Magic, Bacon claims to have seen all of them, except the flying machine, which, of course, shows up 300 years later in Da Vinci’s Notebooks. Is that the case? I don’t know, and I doubt anyone ever will. But what makes this incredible regardless is that even if he didn’t see them as a realized work, each of these imagined (or real) inventions has a practical purpose aimed at the betterment of the lot of humans on the planet at that time and in the future. In other words, he was applying a scientific method to providing practical invention (i.e. of real applicable value to the advance of society). Utility in the service of knowledge is essential. There has to be some actual purpose to the creation of knowledge and for its verification. It isn’t created for its own sake.

But, you might argue, what about brand new fresh ideas? Aren’t they sometimes valuable and yet unique and new so their application isn’t so apparent?

The answer to this is well put indirectly by the poet Charles Simic in a recent New York Review of Books article entitled The Prisoner of History:

“I live between two worlds, the one I see with my eyes open and the one I see with my eyes closed. Unlike other people, I regard the two as equals and trust my eyes as much as I trust my imagination.”

In other words, of course, we have to continually try to imagine the new, but it has to be in context — the context of the world as it is and as we imagine it to be with the realization of those new ideas. The key is “realization,” or the practical application of verified ideas to solving a problem or advancing something in the real world.

This could easily characterize Roger Bacon or any of the visionary thinkers of the 13th century. Or any of us, regardless of era, who want to make what we imagine we can do real — rather than just continue to imagine it. This is vision and imagination applied to real-world problems and needs.

All this — agricultural advances, cathedral building, Roger Bacon’s vision — reflected a broad cultural optimism. This optimism — a transformation of thinking about the place of humanity and individuals in the grandest scheme of all, the evolution of life, and the universe —  drove a 13th-century technological revolution that increased the capacity of the human species to grow more safely and to utilize its gift more actively in a way that was unmatched until the Renaissance.

Roger Bacon’s contribution to this was the creation and initial application of a scientific method to the evolution of science.

Roger Bacon

Roger Bacon labored through life as an almost heretical Franciscan friar, was persecuted and for a time imprisoned by his own order, and died in 1292 at age 78. I’m not going to go into the politics of that or the life of Bacon per se, but will instead focus on what he said and saw, because what he did reflects what each of us as an individual can do in his or her life. Personally, it affects how at least I’m thinking about what I might leave behind, even as I continue to concern myself, as most of us do, with my present and future. I think it’s important because I also think we underestimate exactly who we are and what we are capable of, because we get caught up in the minutiae of our everyday existence. We often forget not just the nobility of our own capacity but the actual tools and practices that are there to effect those proficiencies and possibilities.

We are armed with:

  • The knowledge of what the human species is
  • Who we are as individuals
  • The existence of a philosophical framework that gives us some context to work within
  • The ability to reason
  • Tools and practices

Having all this makes us responsible at some point in our life to choose a way to use all this amazing potential and engage with the world to realize that potential and benefit more than just ourselves. We are each at different levels in our journey to figure this out and each at different degrees of commitment to trying to benefit others, rather than just intend to.

I’ve reached the age where, at least for me, I know what I want to do and how I want to do it and am beginning to consider what kind of memory it will leave when my time on Earth ends. It’s a bit frightening, to be honest, because I don’t want to consider it, but considering it I am.

In that consideration, Roger Bacon has been a paradigm for me, because of how he thinks about knowledge, science, and execution.

I’ll explain.

The first principle of knowledge in the mind of Bacon was virtue. That is, translated into 21st-century lingo, we have to be good human beings. Bacon understood that there is a clarity that goodness provides that allows one to understand truths — not necessarily or only big universal truths but scientific truths. A good person, because their intent is good, is prone to knowledge, because they are emotionally connected to doing good things. His way of putting it:

“For it is not possible that the soul should rest in the light of truth, while it is stained with sins…Virtue therefore clarifies the mind so that man may comprehend more easily not only moral but scientific truths.” — From the 1268 Opus Majus.

But it goes further than that. None of us, and I fervently believe this, is devoid of the potential for creativity. We all have an infinite capacity to create and to apply that creativity in beneficial ways — what Bacon calls "special revelation" or what the incredibly underrated philosopher Philo Judaeus calls "a miniature heaven" in his On the Creation of the Cosmos According to Moses.

In Bacon’s Opus Tertium:

“…Therefore, this way, which precedes special revelation is the wisdom of philosophy and this wisdom alone is in the power of man, yet supplemented by some divine enlightenment which in this part is common to all, because God is the intelligence active in all our souls in all cognition.”

Bacon is saying what I said: Each of us has that divine spark, the potential to create, but it also is a potential that has to be realized by the individual. God will not do it for you. God grants you the capability and the broad opportunity to act on it. You are responsible to realize your potential and then put it into action in a way that is beneficial. Again, Opus Tertium:

“As God wishes all men to be saved and no man to perish and his goodness is infinite, He always leaves some way possible for man through which he may be urged to seek his own salvation…For this reason, the goodness of God ordained that revelation should be given the world so that the human race may be saved. But this way, which precedes revelation, is given to man so that if he does not wish to follow it, nor seek a fuller truth, he may be justly damned at the end.”

Putting it simply, it would be a horrible waste of each of our lives if we didn't apply this gift in a way that benefits each and all of us.

I know that this may be arcane for some of you (though those for whom it is arcane have probably stopped reading), waxing too philosophical for others, and maybe you think this is self-indulgent, which, admittedly, it might be. But, aside from what I'm wrestling with as I enter the last third, what Roger Bacon established in the 13th century — via his philosophical framework and the creation of experimental science and what the application of practical science led to with the technological breakthroughs and conjoined cultural optimism of the era — is one of the reasons we can continue to claim innovation and disruption and technological breakthrough and scientific achievement in the 21st century.

So, with that in mind, I’m going to go through one more thing (there is so much more that I’m leaving out) about Roger Bacon concerning experimental science, and then I will close this out with some of why this impacts me so much and how it has impacted what we all do so that we can see things in perspective or — in the context of the biggest picture — the continuation of the human species for the sake of its own growth.

Roger Bacon and the Integritas Sapientiae

Roger Bacon’s approach to experimental science was driven by a framework and a methodology grounded in a deeply rooted philosophy that was verifiable through research and testing — or disproven as such.

In Bacon’s case, the philosophy was defined by the concept of God as the Creator of all things in an orderly fashion. What that implied was that all things were related in some way via the laws that governed them. That meant that, while God created the heavens, Earth, man and woman, the trees, and the fruit that grew on them, all of which were different in visible ways and fashioned for different reasons — they were related to each by the universal laws of creation and the Creator and governed by those same laws.

To Bacon, this translated to eight definable branches of science; the laws of each of them were discoverable with a single, universal method. The sciences (for those of you interested) were:

  1. Common principles of natural sciences and philosophy
  2. Optics
  3. Astronomy
  4. Barology (the science of weight and its relation to gravity)
  5. Alchemy (actually chemistry not magic)
  6. Agriculture
  7. Medicine
  8. Experimental science

Bacon called these sciences the integrated sciences. Each of the eight had a unique position in the pantheon of science, but at the same time, all eight played a central role in the body of principle and practice that gave the human species hegemony over nature — the ability to alter it to their benefit.

What do I mean by this? (Hey, don’t shoot the messenger).

For example, to understand agriculture, you had to know botany, soil testing, animal husbandry, and horticulture. Your knowledge as an agricultural scientist had to span the interactions between climate, vegetation, and animal populations. This allowed you to figure out how to improve the conditions that would benefit organic life. Think about the example earlier of the leather yoke, horses, waterwheels, and three-crop rotation that disrupted all previous models for growing food and improved the lives of people everywhere by providing more food. This was a systematic, practical, applicable science.

Science had timely prudence, too. For example, Bacon was a strong advocate of military research because of the imminent threat of the Mongol Empire, whose khans he saw as the Anti-Christ. By 1241, the Mongols had reached the Danube in Europe, so researching weapons of war became paramount. Based on both the Opus Majus and another of his works, De Secretis Operibus, several scholars have argued that Bacon was among the first Europeans to record a formula for gunpowder. He didn't envision it as needed for bombs or bullets, per se, but he did see it as something that would defeat the Anti-Christ.

This great achievement, the beginnings of a rigorous method for experimental science, was first proposed both in his greatest work, Opus Majus (which translates, aptly enough, to "Greater Work"), and in his less well-known Communia Naturalium. While its true value wasn't realized until the 17th century, its seeds were planted in the 13th century's cultural optimism.

There is so much more to Bacon’s experiments. There is some evidence that he invented a telescope and a compound microscope hundreds of years before the accepted dates of their invention. He did what can be seen as seminal work in optics and light and radiation. I could go on. But, to close this out, I want to focus on something I think even more important: His passion for finding the truth in things, in laws, in natural law, in the universe, and in life, and the lessons that I’ve learned at least in the search that he has helped guide me on for my life.

Roger Bacon, on truth and humility

Bacon understood that truth wasn't only the property of the renowned — all humans are possessors of truth and thus deserve the respect of their peers. Without that humility, you not only fail to act like a human being, according to what God provided you, but you also deny yourself the opportunity to learn some of the truths being made available to you.

Look at these passages in Opus Majus:

  • “The wiser men are, the more humbly will they submit to learn from others; they do not disdain the simplicity of those who teach them.”
  • “Just as man’s conduct towards God is regulated by the reverence required, so is his conduct toward his neighbor regulated by justice and peace and his duty to himself by integrity of life.”
  • “It comes to pass that he who ceases to be a man by the loss of his goodness is turned into a beast.”

Lessons learned

So, what does all this rambling on about Roger Bacon mean, at least to me?  Let’s bring it in.

Since I found out about Roger Bacon and was drawn to him, he has been a guidepost for my life — a hero that framed much of what I’ve done with my life. He’s given me guidance in how to be what I hope is a good person, a practical foresighted thinker, and someone who will accomplish something of value on the planet to be remembered by.

Guideposts

  1. The universe is governed by a natural law that affects all things regardless of apparent differences.
  2. It is the continuous discovery of that universal natural law and how it works that drives and sustains the human species, whether or not it’s a conscious goal.
  3. Each human being on this planet, each of us, regardless of life’s station, has been granted an infinite capacity to create and is a possessor of truths that each of us can learn from. Titles and positions don’t matter.
  4. With that creative capacity, comes the responsibility to actively seek to use it practically to benefit others — either the species as a whole or groups or individuals. We are granted the gift; doing something with it is up to us.
  5. We each can gain more and greater knowledge if our purpose for gaining it is good (virtuous). That means that we are best served as people if we are driven to do good for more than ourselves.
  6. There is a rigorous method of going about applying that creative gift — proving what you think, regardless of where you apply it. Bacon's desire to prove what he supposed and the method he developed can be applied to how we produce content, develop technology, and do anything else with our lives. For example, when I write, I am always able to defend what I say. I've counseled those of my younger brethren — who tend to be strong-willed and opinionated — that they can write whatever they want, but they need to be prepared to defend whatever they say, which, to be blunt, many times they can't. You have to be able to show that what you say is defensible by its truths. If it isn't or you can't verify the argument, then don't say it.

Our lives today were nourished in the wellsprings of prior centuries and the prior achievements of our forebears. On the one hand, we should respect that past, but we should never live in it. Because if there is one other thing we learned, it is that our wonderful species is constantly evolving and changing, and we have to both drive that change for the species to continue and respect that change as it occurs.

Roger Bacon was a major influence in teaching me all these things. I don't know if any of this resonated with you. I don't know if this was an exercise in sheer self-indulgence. I do know that the impact on me has been to make me think about what I do and whom it impacts, to act in a fashion that supports reason and truth as best as the flawed creature I am knows how, and, at the same time, to give me a moral compass. I hope that when I finally go, my epitaph does not read, "He was No. 1 in CRM" or "He was the Godfather of CRM," but instead says, "He was a good person" or "He did good." Then I've fulfilled my life's purpose and what has been my dream and direction for many years.


News

  1. "Let's Not Let the Hopeful News Get Lost" — post No. 3 — will be up later this week. But it will be up.
  2. If you are interested in joining the hit event The CRM Playaz Present: Playaz Place Bar and Not Grill Happy Hour any time in the next 38 weeks, here is a link to register. Warning: We are sold out (don't worry, it's a free ticket) for April 15 and April 22 and selling out (almost gone) for April 29. There are some seats taken through May 13, but all May dates still remain. If you are interested, the Happy Hour is 3:30pm ET every Wednesday. Bring a glass of a drinkable liquid with you. You will be asked about it.
  3. Also, every Thursday at 3pm ET, The CRM Playaz will do our regular show on industry doings. This week, we have Bob Stutz, President of SAP CX and one of the CRM industry’s great pioneers.
  4. At 3pm ET on Friday, we solve your problem of missing sports with our new show, CRM Playaz: Sports Edition – Excuse the Intrusion. Watch for announcements.



ZDNet | crm RSS


Uber’s AI plays text-based games like a human

January 28, 2020   Big Data

Can AI learn to play text-based games like a human? That’s the question applied scientists at Uber’s AI research division set out to answer in a recent study. Their exploration and imitation-learning-based system — which builds upon an earlier framework called Go-Explore — taps policies to solve a game by following paths (or trajectories) with high rewards.

“Text-based computer games describe their world to the player through natural language and expect the player to interact with the game using text. These games are of interest as they can be seen as a testbed for language understanding, problem-solving, and language generation by artificial agents,” wrote the coauthors of a paper describing the work. “Moreover, they provide a learning environment in which these skills can be acquired through interactions with an environment rather than using fixed corpora … [That’s why] existing methods for solving text-based games are limited to games that are either very simple or have an action space restricted to a predetermined set of admissible actions.”

As the researchers explain, a challenge in developing text-game-playing AI is contending with large action spaces (i.e., the range of decisions facing a player). With a vocabulary of 20,000 words and the possibility of producing sentences of at most 7 words, for example, the total number of actions is a whopping 1.28 × 10^30.
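That figure can be checked with back-of-the-envelope arithmetic; this sketch assumes the simplest possible model, in which every word position is a free choice from the full vocabulary:

```python
# Back-of-the-envelope check of the action-space size: sentences of at
# most 7 words over a 20,000-word vocabulary.
VOCAB_SIZE = 20_000
MAX_WORDS = 7

full_length = VOCAB_SIZE ** MAX_WORDS            # 7-word sentences only
print(f"{full_length:.2e}")                      # 1.28e+30

# Allowing 1- to 7-word sentences barely changes the order of magnitude,
# since the longest sentences dominate the count:
any_length = sum(VOCAB_SIZE ** n for n in range(1, MAX_WORDS + 1))
print(f"{any_length:.2e}")
```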

The modified Go-Explore, then, maps observations to actions while keeping track of under-explored areas in the game space. In the first of two phases — the "exploration" phase — Go-Explore explores the environment and records visited places in archival "cells." These cells contain sets of observations mapped to the same representation by some mathematical function, and each cell is associated with metadata including the trajectory toward that cell, the length of that trajectory, and the cumulative reward of that trajectory.


In every game session, Go-Explore selects a cell based on its metadata and starts to randomly explore from the end of the trajectory associated with the cell. This is the beginning of phase two — the “robustification” phase — the rest of which involves training a policy using the trajectories in phase one. The goal here is to turn a “fragile” sequence of actions into a policy that can be applied across different games, or even one that can generalize to unseen games.
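The exploration phase described above can be sketched in a few lines; this is an illustrative reconstruction (the environment interface and the `cell_key` function are assumptions, not the paper's code), showing how archived trajectories let the system return to promising cells instead of always exploring from scratch:

```python
import random

def exploration_phase(env, cell_key, episodes=100, max_steps=50):
    """Sketch of Go-Explore's exploration phase: remember the best
    trajectory that reached each "cell", and restart exploration from
    archived cells rather than always from the initial state."""
    archive = {}  # cell key -> {"traj": [...], "reward": float}

    for _ in range(episodes):
        # Pick an archived cell to return to (uniformly here; the real
        # system weights this choice by cell metadata) and replay its trajectory.
        traj, reward = [], 0.0
        if archive:
            chosen = random.choice(list(archive.values()))
            traj, reward = list(chosen["traj"]), chosen["reward"]
        obs = env.reset()
        for action in traj:
            obs, _, _ = env.step(action)  # rewards already counted in `reward`

        # Random exploration from the end of the restored trajectory.
        for _ in range(max_steps - len(traj)):
            action = random.choice(env.valid_actions(obs))
            obs, r, done = env.step(action)
            traj.append(action)
            reward += r
            # Archive this trajectory if it reaches a new cell or beats the
            # archived trajectory to the same cell.
            best = archive.get(cell_key(obs))
            if best is None or reward > best["reward"]:
                archive[cell_key(obs)] = {"traj": list(traj), "reward": reward}
            if done:
                break
    return archive
```

The trajectories collected this way are exactly what the robustification phase then distills into a policy.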

In a series of experiments, the researchers set Go-Explore loose in two games where multiple words are required to win and the reward is particularly sparse (i.e., feedback isn't available for most actions). The first was CoinCollector, a class of text-based games where the objective is to find and collect a coin from a location given a set of rooms, and the second was CookingWorld, a collection of 4,440 games spanning 222 difficulty levels (20 games per level, each with different entities and maps). While CoinCollector only parses five commands in total, CookingWorld accepts 18 verbs and 51 entities with a predefined grammar and a total vocabulary of 20,000 words, and it requires many actions (at least 30 in hard games) to find a reward.

For CookingWorld, the team devised three scenarios: single, where one agent was trained and tested on one game; joint, where a single policy was trained and tested on all 4,440 games; and zero-shot, where games were split into training, validation, and test sets and the policy was trained on the training games and tested on unseen test games. In all games, including CoinCollector, the maximum number of steps was set to 50 for simplicity's sake.

The team reports that this flavor of Go-Explore found an optimal strategy in CoinCollector with approximately half the actions of the previous state-of-the-art system, and with a trajectory length of 30 steps compared with the previous best average of 38. In CookingWorld, Go-Explore attained a total score of 19,530 across all games (close to the maximum of 19,882) in 47,562 steps, and it found a winning trajectory in 4,279 of the 4,440 games.

It's not a flawless approach by any stretch, the researchers note. There's a large overlap in the descriptions of games, leading to situations where a policy receives similar observations but is expected to take two different actions. And Go-Explore would have a hard time finding good trajectories in games with larger action spaces, like Zork I. That said, the team believes its modified Go-Explore system shows "promising results" in the text game arena.


Big Data – VentureBeat


Will AI Force Humans To Become More Human? (Part 1)

January 21, 2020   BI News and Info

Part 1 of a 2-part series exploring the intersection between AI and humanity

Will artificial intelligence (AI) create an environment where design thinking skills are more valuable than data science skills? Will AI alter how we define human intelligence?

Those sound like questions one might expect from an episode of Rod Serling's TV series The Twilight Zone. Instead of AI replacing humans, will AI actually make humans more human? Will characteristics such as empathy, compassion, and collaboration actually become the future high-value skills cherished by leading organizations?

Let’s explore, but we need to start with some definitions.

AI, AI rational agents, and the AI utility function, oh my!

AI is defined as the simulation of human intelligence. AI relies upon the creation of “AI rational agents” that interact with the environment to learn, where learning or intelligence is guided by the definition of the rewards associated with actions. AI leverages deep learning, machine learning, and/or reinforcement learning to guide the “AI rational agent” to learn from continuous engagement with its environment to create the intelligence necessary to maximize current and future rewards (see Figure 1).


Figure 1: AI Rational Agent
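The loop Figure 1 describes — an agent acting on its environment and updating itself from the reward signal — can be sketched generically; this is an illustrative skeleton (the environment interface and the simple value update are assumptions, not any specific framework's API):

```python
import random

def run_rational_agent(env, q_values, episodes=200, alpha=0.1, epsilon=0.2):
    """Generic rational-agent loop: observe the state, choose an action,
    receive a reward from the environment, and update the value estimates
    that guide future choices."""
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            actions = env.actions(state)
            if random.random() < epsilon:            # explore occasionally
                action = random.choice(actions)
            else:                                    # otherwise exploit estimates
                action = max(actions, key=lambda a: q_values.get((state, a), 0.0))
            next_state, reward, done = env.step(action)
            # Nudge the estimate toward the observed reward (a simple,
            # bandit-style update standing in for full RL machinery).
            old = q_values.get((state, action), 0.0)
            q_values[(state, action)] = old + alpha * (reward - old)
            state = next_state
    return q_values
```

The point of the sketch is the shape of the loop, not the update rule: deep learning, machine learning, or reinforcement learning each slot into the "update the estimates" step.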

The rewards the AI rational agents seek to maximize are framed by the definition of “value” as defined in the AI utility function – the objective criterion that measures the progress and success of an AI rational agent’s behaviors. To ensure the creation of an “AI rational agent” that exhibits the necessary intelligence to make the “right” decision, the AI utility function must cover a holistic definition of “value” that includes financial, operational, customer, societal, environmental, and spiritual (see Figure 2).


Figure 2: “Why Utility Determination Is Critical to Defining AI Success”

To summarize, AI is driven by AI rational agents that seek to drive “intelligent” actions based upon “value” as defined by the AI utility function. To design a holistic AI utility function that drives “intelligence” (whether artificial intelligence or human intelligence), we need to start by defining, or redefining, what we mean by “intelligence.”
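A holistic utility function of the kind described is often modeled as a weighted sum over value dimensions; in this sketch the weights and scores are purely illustrative (the article names the dimensions but prescribes no numbers):

```python
# Illustrative multi-dimension utility function: score an action's outcomes
# across the value dimensions listed above, weighted by priority.
WEIGHTS = {
    "financial": 0.30, "operational": 0.20, "customer": 0.20,
    "societal": 0.15, "environmental": 0.10, "spiritual": 0.05,
}

def utility(outcomes: dict) -> float:
    """Weighted sum of per-dimension outcome scores (each in [0, 1])."""
    return sum(WEIGHTS[d] * outcomes.get(d, 0.0) for d in WEIGHTS)

# An agent that maximizes only financial value scores lower than one that
# balances the dimensions, even at comparable financial outcomes.
greedy   = {"financial": 1.0, "environmental": 0.0, "societal": 0.2}
balanced = {"financial": 0.8, "environmental": 0.9, "societal": 0.8,
            "customer": 0.7, "operational": 0.6, "spiritual": 0.5}
assert utility(balanced) > utility(greedy)
```

Choosing those weights is precisely the "utility determination" problem Figure 2 refers to: the numbers encode what the organization means by "value."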

Redefining intelligence

Intelligence is defined as the ability to acquire and apply knowledge and skills. 

Our U.S. educational institutions have created numerous tests (Iowa Basic Skills, ACT, SAT, GMAT) to measure one's "intelligence." Yet there are many stories demonstrating that the education system's need to put people into "intelligence boxes" has actually stifled creativity. (For two such stories, see Sir Ken Robinson's famous TED Talk, "Do Schools Kill Creativity?" and the story of Gillian Lynne, famous for changing the world of dance and choreography through musicals such as Cats and The Phantom of the Opera.) Anyone with children knows the horror of this dilemma as they panic to prepare for ACT and SAT tests that play an outsize role in deciding their future.

This archaic definition of “intelligence” actually has the exact opposite impact, in that it reduces students (our children) to rote learning machines, driving out the creativity and innovation skills that differentiate us from machines.

We already have experienced machines taking over some of the original components of intelligence. How many people use long division, or manually calculate the square root, or multiply numbers with more than two digits in their heads? Traditional measures of intelligence are already under assault by machines.

And AI is going to make further inroads into what we have traditionally defined as intelligence. Human intelligence will no longer be defined by one’s ability to reduce inventory costs, or improve operational uptime, or detect cancer, or prevent unplanned maintenance, or flag at-risk patients and students. Those are all tasks at which AI models will excel. There’s no human competitive advantage there anymore.

We must focus on nurturing the creativity and innovation skills that distinctly make us human and differentiate us from analytical machines. We need a new definition of intelligence that nurtures uniquely human creativity and innovation capabilities (said by the new chief innovation officer at Hitachi Vantara, wink, wink).

Part 2 of this series will explore further the human skills that make us unique, the concept of design thinking as it relates to innovation, and answer the question, “Will AI actually make humans more human?”

This article originally appeared on LinkedIn and is republished by permission.

As technology embeds deeper into our lives, companies need to elevate the role of people and culture.


Digitalist Magazine


Chatbot or Human? Combine Both to Achieve Customer Service Success

January 8, 2020   CRM News and Info

Companies across industries are shifting toward an automated approach to customer relations. With the help of chatbots, businesses can reduce their customer service costs by up to 30 percent and save both employees' and customers' valuable time.

In fact, many customers welcome chatbots in the customer service process.

So, if chatbots offer all of these advantages, why not scrap your customer service team altogether and automate everything?

Not so fast. While chatbots certainly have their benefits for customer service, they are still a long way from being sophisticated enough to deal with all client interactions — and many customers still prefer speaking to an actual human.

In fact, an increasing number of consumers prefer a combination of automation and human-led interactions. Both of these approaches can help companies reach the top of their customer service game.

First, what actually are chatbots, and how is AI advancing them?

Chatbots Have a Lot to Offer

Chatbots are applications that interact with humans via text through various platforms and for various purposes. These days, businesses primarily use chatbots to facilitate online ordering, make product suggestions, offer personal finance assistance, maintain schedules, and of course — provide customer support.

The chatbot industry is booming, with its market value estimated to reach $10.08 billion by 2026. From startups to enterprises, businesses across the spectrum are realizing the value of chatbots for driving efficiency and their bottom lines.

In fact, 80 percent of business decision-makers planned to start using a chatbot in some way or another by 2020, an Oracle survey found.

So, how are chatbots proving to be so beneficial?

When used correctly, customer service chatbots can save companies serious amounts of time and money. A well-developed chatbot can improve response time and customer satisfaction, and help retain vital customers.

Because chatbots are software, not human representatives, they have far greater scaling potential. Think of it like this: A human-based customer service interaction is one to one, between a live agent and a customer. A chatbot can interact with hundreds or thousands of people at the same time, none of whom will experience a drop in quality or response time.

Just to give an indication of how much time chatbots can save: in an average 6-minute customer service call, 75 percent of the time is spent doing manual research, while only 25 percent is devoted to customer interaction. Not to mention, the wait time customers spend getting through to an agent in the first place is nonexistent with chatbots.

This not only saves companies the resources they'd otherwise channel into employing human agents, but also makes their business contactable 24/7. Chatbots for customer service will help businesses save an astounding total of US$8 billion per year by 2022, research from IBM indicates.

However, it’s important to note that chatbots can vary wildly in their capabilities, and companies looking for top-of-the-range tech should be prioritizing the inclusion of technologies like artificial intelligence (AI) and natural language processing (NLP). Let’s take a look at what chatbots need to include to please customers rather than push them away.

All Chatbots Are Not Created Equal

When looking to leverage chatbots to drive efficiency in customer service, businesses often deploy rudimentary, path-based bots that are unable to engage in natural-sounding conversation and deliver accurate responses.

Using such chatbots can be risky. Seventy-three percent of users recently surveyed would not return to a chatbot they deemed unhelpful the first time they used it. Bad chatbots could end up driving your customers away.

So, what makes good chatbots good?

Advanced AI and NLP technologies ensure that customer-chatbot interactions are more informative and fluid. Chatbots embedded with these functionalities can understand the context around requests, engage in free-flowing conversation, and learn from the data they collect over time. This means that in many cases, customers can get meaningful and relevant answers in a matter of milliseconds.

When should businesses apply chatbot technology, and when should they opt for human-led interactions?

When to Use Chatbots

As they currently stand, the majority of chatbots are great for answering simple requests and minor troubleshooting. In fact, many customers prefer to troubleshoot on their own before talking to a live agent, which is when chatbots can be incredibly useful.

For example, basic chatbots can answer high-frequency questions on things like opening hours and location, which don’t require human intervention to explain. Businesses that experience many of these types of low-value queries would benefit greatly from chatbots.
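A basic bot of this kind amounts to keyword matching against a small FAQ table; this sketch (all questions, answers, and keywords are made-up examples) shows how little machinery such low-value queries need, and how everything else gets handed to a human:

```python
import re

# Minimal keyword-matching FAQ bot: enough for high-frequency, low-value
# queries, with unmatched messages escalated to a live agent.
FAQ = {
    ("hours", "open", "close"): "We're open 9am-6pm, Monday to Saturday.",
    ("location", "address", "where"): "We're at 123 Example Street.",
    ("return", "refund"): "Returns are accepted within 30 days with a receipt.",
}

HANDOFF = "Let me connect you with a live agent."

def reply(message: str) -> str:
    words = re.findall(r"[a-z']+", message.lower())   # tokenize, drop punctuation
    for keywords, answer in FAQ.items():
        if any(k in words for k in keywords):
            return answer
    return HANDOFF  # complex or unrecognized query: escalate to a human

print(reply("What time do you open?"))   # the opening-hours answer
print(reply("My order arrived broken"))  # falls through to a human
```

The fallback line is the important design choice: a path-based bot that guesses instead of handing off is exactly the kind that drives customers away.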

A top-of-the-line chatbot that includes AI and NLP can respond to customer requests using conversational language or handle more complex queries involving specific products.

Chatbots are a great option for when customer service representatives are offline or overloaded with work. Chatbots can deal with multiple requests at a time, and never need time off. You can utilize them as a tool that allows your customers to contact your business at any time.

When Human-Led Customer Service Is Best

Despite the time and money-saving potential of chatbots for businesses, the reality is that many people still want to speak to an actual person, especially when the issue is complicated. Customers found that live chats with humans consistently outperformed chatbots for a range of reasons, including great customer experience and ease of communication, according to this 2019 report.

Evidently, people are still wary that a chatbot won’t be able to resolve their query from start to finish. It pays to make sure your customers have access to a live agent when a complex or technical request requires human judgment and intuition.

Most chatbots are unable to detect when someone is becoming irate. They cannot manage delicate situations, or determine when to offer a discount or upgrade to dissatisfied customers in the same way a human can.

Customers often spend time attempting to resolve an issue with a chatbot, only to be transferred to a live agent after having failed to get an answer from the bot, at which point they already are frustrated. In cases like these, it’s best to have an agent on the case from the beginning.

People generally had good experiences with chatbots — up to a point, based on one recent poll. Of the people surveyed, 51 percent said they were open to using a chatbot when the issue was a simple one, whereas with more complex problems, 57 percent said they would prefer to wait 10 minutes to get help from a human.

Human customer service representatives are better at identifying upselling opportunities and giving a conversation a personal touch that could make customers more receptive to additional purchases. While AI chatbot technology is developing fast, it’s still a long way from being able to match the nuances of human-led conversations.

Generational Preferences

While it’s clear that consumers have distinct preferences when it comes to chatbot or human-led interactions, there is some variation across demographics on these choices. Businesses should take this into account and look at their customer segments. Are their customers likely to feel more comfortable with a human on the line, or are they open to chatbots?

Unsurprisingly, millennials and generation Z are leading the way in chatbot usage, with the number of consumers that prefer a live agent declining with each older generation. Younger generations, however, are more demanding when it comes to a seamless digital experience switching between channels. This means that if you’re offering a combined chatbot and human agent customer service experience, the transition between the two should be easy and straightforward.

Better Together

Ultimately, you shouldn’t feel that you have to choose between chatbots and humans in your customer service strategy. If you are considering using a chatbot, make sure you first have identified the problem area it will help to address. Will it be dealing with the simple queries your agents often receive? Or does it need to have specialized knowledge on a certain product?

Once you have the goals and purposes of your chatbot clear, you can ensure that your live agents spend their time efficiently and use the technology to complement their own tasks. With the time and money-saving promise of chatbots, along with live agents’ guarantee of a great customer experience, you’ll be well on your way to customer service success.


Jay Reeder is CEO of VoiceNation.


CRM Buyer


How The Future Works: Why your ultimate job is to be HUMAN. A…

January 7, 2020   Big Data




How The Future Works: Why your ultimate job is to be HUMAN. A film by Fu…

Privacy, Big Data, Human Futures by Gerd Leonhard


Tencent details how its MOBA-playing AI system beats 99.81% of human opponents

December 25, 2019   Big Data

In August, Tencent announced it had developed an AI system capable of defeating teams of pros in a five-on-five match in Honor of Kings (or Arena of Valor, depending on the region). This was a noteworthy achievement — Honor of Kings occupies the video game subgenre known as multiplayer online battle arena (MOBA) games, which are incomplete-information games in the sense that players are unaware of the actions other players choose. The endgame, then, isn't merely AI that achieves superhuman Honor of Kings performance, but insights that might be used to develop systems capable of solving some of society's toughest challenges.

A paper published this week peels back the layers of Tencent’s technique, which the coauthors describe as “highly scalable.” They claim its novel strategies enable it to explore the game map “efficiently,” with an actor-critic architecture that self-improves over time.

As the researchers point out, real-time strategy games like Honor of Kings require highly complex action control compared with traditional board games and Atari games. Their environments also tend to be more complicated (Honor of Kings has 10^600 possible states and 10^18,000 possible actions), and the objectives are more complex on the whole. Agents must learn not only to plan, attack, and defend but also to control skill combos and to induce and deceive opponents, all while contending with hazards like creeps and fully automated turrets.

Tencent’s architecture consists of four modules: Reinforcement Learning (RL) Learner, Artificial Intelligence (AI) Server, Dispatch Module, and Memory Pool.

The AI Server — which runs on a single processor core, thanks to some clever compression — dictates how the AI model interacts with objects in the game environment. It generates episodes via self-play, and, based on the features it extracts from the game state, it predicts players’ actions and forwards them to the game core for execution. The game core then returns the next state and the corresponding reward value, or the value that spurs the model toward certain Honor of Kings goals.
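As a rough illustration, the interaction loop the AI Server runs can be sketched as follows. The class and function names, and the toy transition and reward logic, are invented for this example and are not taken from Tencent's paper:

```python
class GameCore:
    """Stand-in for the game core: applies an action and returns the
    next state plus a reward value (all logic here is a toy example)."""
    def step(self, state, action):
        next_state = (state + action) % 100           # toy transition
        reward = 1.0 if action == state % 4 else 0.0  # toy reward signal
        return next_state, reward

def extract_features(state):
    """Toy feature extraction from the raw game state."""
    return [state % 4, state // 10]

def predict_action(features):
    """Stand-in policy: in the real system, a neural network predicts
    the action from the extracted features."""
    return features[0]

def self_play_episode(game, initial_state, max_steps=10):
    """Generate one episode of (features, action, reward) samples,
    which would then be shipped onward for training."""
    state, samples = initial_state, []
    for _ in range(max_steps):
        feats = extract_features(state)
        action = predict_action(feats)
        state, reward = game.step(state, action)
        samples.append((feats, action, reward))
    return samples

samples = self_play_episode(GameCore(), initial_state=7)
```

The real server does this at scale via self-play, with the game core supplying the next state and reward after each predicted action.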


As for the Dispatch Module, it’s bundled with several AI Servers on the same machine, and it collects data samples consisting of rewards, features, action probabilities, and more before compressing and sending them to Memory Pools. The Memory Pool — which is also a server — supports samples of various lengths and data sampling based on the generated time, and it implements a circular queue structure that performs storage operations in a data-efficient fashion.
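The circular-queue storage described for the Memory Pool can be sketched in a few lines. This is a generic ring buffer under the stated assumptions, not Tencent's implementation:

```python
import random

class MemoryPool:
    """Fixed-capacity circular queue for experience samples; a sketch
    of the structure described above, not Tencent's code."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = [None] * capacity
        self.head = 0          # next write position
        self.size = 0

    def push(self, sample):
        # Once full, the oldest sample is overwritten in place,
        # keeping storage at a fixed size.
        self.buffer[self.head] = sample
        self.head = (self.head + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def sample(self, k):
        """Uniform random draw of up to k stored samples."""
        return random.sample(self.buffer[:self.size], min(k, self.size))

pool = MemoryPool(capacity=4)
for i in range(6):             # six pushes into a four-slot pool
    pool.push({"reward": float(i)})
```

After six pushes into a four-slot pool, the two oldest samples have been overwritten, which is the data-efficiency property the paper highlights.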

Lastly, the Reinforcement Learner, a distributed training environment, accelerates policy updates with the aforementioned actor-critic approach. Multiple Reinforcement Learners fetch data in parallel from Memory Pools, with which they communicate using shared memory. One mechanism (target attention) helps with enemy target selection, while another —  long short-term memory (LSTM), an algorithm capable of learning long-term dependencies — teaches hero players skill combos critical to inflicting “severe” damage.

The Tencent researchers’ system encodes image features and game state information such that each unit and enemy target is represented numerically. An action mask cleverly incorporates prior knowledge of experienced human players, preventing the AI from attempting to traverse physically “forbidden” areas of game maps (like challenging terrain).
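The action-mask idea, zeroing out the probability of "forbidden" actions before the policy samples one, can be sketched like this; the logits and legality flags below are made up for illustration:

```python
import math

def masked_softmax(logits, legal):
    """Apply an action mask before the softmax: illegal actions
    (e.g. moving into forbidden terrain) receive probability zero."""
    masked = [l if ok else float("-inf") for l, ok in zip(logits, legal)]
    m = max(x for x in masked if x != float("-inf"))
    exps = [math.exp(x - m) if x != float("-inf") else 0.0 for x in masked]
    z = sum(exps)
    return [e / z for e in exps]

# Four candidate actions; the last two are masked as forbidden.
probs = masked_softmax([2.0, 1.0, 0.5, 3.0], [True, True, False, False])
```

Note that the highest-logit action (the fourth) gets zero probability once masked, so the prior human knowledge overrides the raw network output.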

In experiments, the paper’s coauthors ran the framework across a total of 600,000 cores and 1,064 graphics cards (a mixture of Nvidia Tesla P40s and Nvidia V100s), which crunched 16,000 features containing unconcealed unit attributes and game information. Training one hero required 48 graphics cards and 18,000 processor cores at a speed of about 80,000 samples per second per card. And collectively for every day of training, the system accumulated the equivalent of 500 years of human experience.


The AI’s Elo score, derived from a system for calculating the relative skill levels of players in zero-sum games, unsurprisingly increased steadily with training, the coauthors note. It became relatively stable within 80 hours, according to the researchers, and within just 30 hours it began to defeat the top 1% of human Honor of Kings players.
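For reference, the standard Elo update used to produce such scores looks like this; the ratings and K-factor below are illustrative values, not numbers from the paper:

```python
def elo_update(r_a, r_b, score_a, k=32):
    """Standard Elo update for a zero-sum game: score_a is 1 for a
    win, 0.5 for a draw, and 0 for a loss by player A."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1 - score_a) - (1 - expected_a))
    return r_a_new, r_b_new

# The agent (rated 1500) beats a 1600-rated opponent, so its rating rises.
new_agent, new_opp = elo_update(1500, 1600, score_a=1)
```

Because the game is zero-sum and both players use the same K-factor, rating points gained by one side exactly equal points lost by the other.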

The system executes actions via the AI model every 133 milliseconds, or about the response time of a top amateur player. Five professional players (“QGhappy.Hurt,” “WE.762,” “TS.NuanYang,” “QGhappy.Fly,” and “eStarPro.Cat”) were invited to play against it, as were a diversity of players attending the ChinaJoy 2019 conference in Shanghai between August 2 and August 5.

The researchers note that despite eStarPro.Cat’s prowess with mage-type heroes, the AI achieved five kills per game and was killed only 1.33 times per game on average. In public matches, its win rate was 99.81% over 2,100 matches, and five of the eight AI-controlled heroes managed a 100% win rate.

Tencent’s researchers are far from the only ones whose AI has beaten human players: DeepMind’s AlphaStar beat 99.8% of human StarCraft 2 players, while OpenAI’s OpenAI Five framework twice defeated a professional Dota 2 team in public matches.

The Tencent researchers say that they plan to make both their framework and algorithms open source in the near future, toward the goal of fostering research on complex games like Honor of Kings.


Big Data – VentureBeat


GRC And Intelligent Finance: How To Get The Human Factor Right

December 4, 2019   SAP

Part 4 of the “Finance Transformation” series, which explores how finance can take the lead in driving the company toward becoming an intelligent enterprise

By definition, automation means removing the human factor from a given process. This is as true for finance as it is for manufacturing or any other line of business.

But the truth is, the human factor can never be removed entirely, particularly for finance processes where governance, risk, and compliance (GRC) is non-negotiable. Visibility into and oversight over automated processes are requirements: we are in control, not the robots, and we must maintain appropriate oversight of every automated process.

The question is this: how much of the human factor can be legitimately removed from processes while still maintaining control? Managing payments is a good example. As finance organizations have long known, instead of paying people to check every invoice against every purchase, technology can be used to automate the process of matching purchase orders (POs), goods received, and invoices. This is known as the three-way match.
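A minimal sketch of such an automated three-way match might look like the following; the field names and tolerances are assumptions for illustration, not those of any particular ERP system:

```python
def three_way_match(po, receipt, invoice, qty_tol=0, price_tol=0.0):
    """Compare the purchase order, the goods-received note, and the
    invoice; return a list of discrepancies (empty means the match
    passed and payment can proceed without human review)."""
    issues = []
    if abs(receipt["qty"] - po["qty"]) > qty_tol:
        issues.append("quantity received differs from PO")
    if abs(invoice["qty"] - receipt["qty"]) > qty_tol:
        issues.append("invoiced quantity differs from goods received")
    if abs(invoice["unit_price"] - po["unit_price"]) > price_tol:
        issues.append("invoiced price differs from PO")
    return issues

issues = three_way_match(
    {"qty": 100, "unit_price": 9.50},   # purchase order
    {"qty": 100},                        # goods received
    {"qty": 100, "unit_price": 9.75},   # invoice
)
```

Here the quantities agree but the invoiced price does not, so only this one transaction would be routed to a human.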

In this instance, you might maintain the human factor in the form of an auditor. This auditor comes in periodically and pulls, say, 25 transactions at random – examining them to make sure there is no sign of error, fraud, or noncompliance to policy. If the auditor finds an issue, the next step is to pull additional transactions to ascertain the extent of possible anomalies and exceptions and finally recommend remediation steps.

Most certainly, this approach provides some level of assurance and is a compromise between auditing every transaction and helping ensure adherence to policy based on sampling. In the end, however, this is primarily manual in nature and does not provide full coverage of transactions.

Toward full automation

A better way forward is full automation with real-time management by exception. The idea here is to pull humans into the process only as needed. Any process – procure-to-pay, order-to-cash, treasury management, billing and credit, and much more – can be fully automated with proper real-time monitoring to alert process managers of outlier events.

What’s more, whereas the auditing approach involves a statistical sampling of transactions, real-time management by exception means that your system monitors 100% of transactions, configurations, and relevant master data in the here and now.
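A rule-based exception monitor of this kind can be sketched as follows; the rules, thresholds, and field names are illustrative assumptions, not part of any specific product:

```python
def monitor(transactions, rules):
    """Real-time management by exception: every transaction is checked
    against every rule, and only the outliers are routed to a human."""
    alerts = []
    for txn in transactions:
        for name, check in rules.items():
            if check(txn):
                alerts.append((txn["id"], name))
    return alerts

rules = {
    "over_threshold": lambda t: t["amount"] > 10_000,
    "missing_po":     lambda t: not t.get("po_ref"),
}
txns = [
    {"id": "T1", "amount": 250,    "po_ref": "PO-7"},
    {"id": "T2", "amount": 18_000, "po_ref": "PO-8"},
    {"id": "T3", "amount": 90},
]
alerts = monitor(txns, rules)
```

All three transactions are examined (100% coverage), yet only the two outliers generate alerts for a process manager.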

Getting the human factor just right with full automation, however, requires a different approach to financial process management. You might find it helpful to think in terms of intelligent access, intelligent controls, and intelligent detection.

Intelligent access

Automation and hybrid landscapes can complicate access to financial systems. In the past, you needed to authenticate individuals – or not – based on credentials provided. Now, to facilitate end-to-end automated processes, you need to provide access across landscapes, and in some cases, to robots, as well.

To manage the access risks, you’ll need to manage digital identities across systems and be able to authenticate robotic identities. A good practice is to assign each incoming person or machine a defined role that grants just enough permission to perform the needed business function. And from an audit perspective, every transaction and user requires monitoring: every action taken by users and robotic processes must be logged, producing an audit trail and feeding an alerting system that detects anomalies and potentially malicious activity.
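A minimal sketch of role-based authorization with a full audit trail, covering human and robotic identities alike, might look like this; the role names and permissions are invented for the example:

```python
import datetime

ROLES = {
    # Illustrative role definitions; real ones come from your IAM system.
    "ap_clerk": {"create_invoice", "view_invoice"},
    "ap_bot":   {"post_payment", "view_invoice"},
}

audit_log = []

def authorize(identity, role, permission):
    """Allow the action only if the role grants it, and log every
    attempt (human or robotic) for the audit trail."""
    allowed = permission in ROLES.get(role, set())
    audit_log.append({
        "when": datetime.datetime.utcnow().isoformat(),
        "who": identity, "role": role,
        "permission": permission, "allowed": allowed,
    })
    return allowed

ok = authorize("rpa-bot-17", "ap_bot", "post_payment")
denied = authorize("rpa-bot-17", "ap_bot", "create_invoice")
```

Both the granted and the denied attempt land in the audit log, which is exactly the trail an anomaly-detection layer would consume.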

Technology such as machine learning (ML) can help. By reading process data in real time, ML algorithms can detect security and transactional anomalies at the application layer and alert process managers. ML can also be used to intelligently optimize role definition, which can then be assigned dynamically in a secure and traceable manner.

Intelligent controls

Fully automated financial processes are controlled primarily through configuration, master data, and transaction monitoring.

Configuration settings are key to establishing and maintaining processes that are aligned to policy. To optimize processes, leading organizations are adding continuous control monitoring to provide a feedback loop on how these settings can be monitored. Take, for example, a setting that alerts a process owner to a change in the thresholds assigned to a three-way match before manual intervention is required. Or consider monitoring depreciation calculations tied to automated postings, changes to charts of account tables, or modifications to posting and reconciliation rules associated with accounting periods and the financial close processes. By replacing manual controls with fully automated controls, process owners and auditors can gain greater visibility and trust in processes, including robotic processes, managed by core ERP systems.

Proper master data monitoring is also critical to help prevent policy violations or potential fraud. Fields that contain sensitive data can be monitored to help ensure accuracy and completeness, as well as for changes that might be motivated by policy evasion, such as one-off transactions. Monitorable master data includes key fields in vendor or customer master accounts, such as bank account information, key fields in POs or invoices, and conversion values used in various calculations. And in today’s world of stricter data protection and privacy requirements, the ability to mask sensitive information, or to log access to it, is also needed.
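A simple change detector over master data fields might look like the following sketch; the vendor fields and the set of "sensitive" fields are assumptions for illustration:

```python
def diff_master_data(before, after,
                     sensitive=frozenset({"bank_account", "iban"})):
    """Report changed master data fields, flagging the sensitive ones;
    a change to bank details is a classic fraud indicator."""
    changes, flagged = [], []
    for field in before:
        if before[field] != after.get(field):
            changes.append(field)
            if field in sensitive:
                flagged.append(field)
    return changes, flagged

before = {"name": "Acme GmbH", "bank_account": "DE00 1234", "city": "Bonn"}
after  = {"name": "Acme GmbH", "bank_account": "DE00 9999", "city": "Bonn"}
changes, flagged = diff_master_data(before, after)
```

Run on every master data update, this catches the change to the vendor's bank account and flags it for a process owner before payments go out.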

Finally, transaction monitoring provides another layer to help identify unexpected outcomes in core processes. Although effective access management, configuration, and master data monitoring are all important in a detective and preventive approach, transaction monitoring adds an important final check to help identify where these prior approaches might not be yielding the expected results and adjustments might be in order. And by receiving alerts, a process owner can also have the ability to drill down quickly into the transaction details residing in back-end systems. This is enabled now more than ever, as SAP S/4HANA allows for tighter integration of core processes that help provide a more consolidated view of enterprise business data that serves as a single source of finance truth.

Intelligent detection

Constant monitoring of finance processes is required to detect intrusions and potentially fraudulent activity. Proper detection is critical at the point of system access – but it doesn’t stop there.

On a 24×7 basis, your systems must be monitored to detect both external and internal threats. Beyond traditional cybersecurity and monitoring approaches, many companies are now turning to ML algorithms as the risk landscape grows in complexity.

You can stay on the offensive by using ML to analyze and correlate logs from past and current security events. Based on this historical data, ML can help you run forensic investigations that uncover new attack patterns before they impact your systems. It can also detect patterns of system activity in real time and generate alerts for anomalous conditions that might require immediate attention.

ML can also be used to map structured and unstructured data across systems and highlight personal information, which is relevant under various data-privacy regulatory pronouncements.

The advantages of intelligence

With intelligent access, intelligent controls, and intelligent detection, organizations can get the human factor just right for automated finance processes. The outcomes, though, are not merely a new set of requirements for managing finance risk, but tangible business benefits that serve the business well and provide greater assurance to process owners and stakeholders alike. These include trusted processes that drive business performance, expected results that instill confidence, and streamlined audits that drive down the costs of compliance.

Join the second SAP Intelligent Finance virtual event on Tuesday, Feb. 11, 2020, and explore the new reality driving finance transformation. Register now.

Follow SAP Finance online: @SAPFinance (Twitter) | LinkedIn | Facebook | YouTube


Digitalist Magazine


Human Resources In A High-Tech World: The Challenges For CHROs

August 3, 2019   SAP

It’s a well-worn cliché that people are the business’s most valuable asset. But just because it’s a tired phrase doesn’t mean it’s not true – especially in the world of high tech. In fact, the highly skilled, highly creative talent that tech companies now need to succeed is getting harder to find, and there’s a war on to attract and retain these prized workers.

By one estimate, yearly demand for data scientists, data developers, and data engineers will translate into roughly 700,000 job openings by 2020 – and the number of jobs for all US data professionals will increase by 364,000 openings to reach a total of 2,720,000. In other words, there are a lot of jobs but not a lot of qualified candidates to fill them.

Part of the problem is that, due to the push toward digital transformation, virtually every industry is now a high-tech industry. Every process is a target for automation and digitization. Every enterprise is now a data-driven enterprise. High-tech stalwarts like Google, Netflix, and Amazon aren’t just competing with one another and a host of Silicon Valley startups for the talent they need. They’re also competing with automotive companies, Wall Street, retailers, banks, shipping companies, healthcare, and insurance providers …  the list is endless.

Another factor in the growing demand for high-tech talent is that these workers tend not to stay at one job for more than two or three years. Turnover is high. Opportunities are abundant. It can be as difficult to keep the right people as it is to find them.

It’s no wonder that the CHROs at the high-tech companies I work with consistently rank access to talent as their make-or-break challenge. And it’s creating new expectations for what it means to be an effective HR leader. Today’s high-tech CHROs are expected to:

  • Help define enterprise strategy: If talent is the catalyst for enterprise success, HR leaders have to help define their organization’s go-to-market strategy. They not only need to know the talent demands that will result from a strategic change; they also need to provide their C-suite colleagues with insight into the skillsets of the existing workforce and the availability of new talent that will turn strategy into reality.
  • Think more like a COO: More than ever, CHROs need to stay plugged into all aspects of the business’ ongoing operations. They need insight into the talent needs of every department, division, and location. That’s why we’re seeing a growing number of CHROs appointed from the operations side of the business, rather than from within HR.
  • Become more analytic and data-driven: HR is still a human-centric function, but it’s relying more on advanced analytics and a wealth of data to help win the war for talent. High-tech companies can’t afford to make hiring decisions based on gut feelings and best guesses. They need the reliability that comes with data-driven decisions.
  • Accelerate HR cycles and processes: Today’s good candidates have too many options. A prolonged vetting and hiring process gives them time to consider other opportunities. To outmaneuver other companies competing for the same talent, it’s good to be first in line with an offer.

Finally, while it’s always hard to find great talent, it can be even harder to keep it. CHROs already know this, and they’re committed to providing a great employee experience that helps tip the scales in their organization’s favor. A rich and rewarding work environment is the primary way a company proves that it’s not just peddling marketing fluff when it says its employees are its most valuable asset.

High-tech companies that can’t give their employees a compelling experience will always struggle to provide one to their customers. Culture matters – and doing the little things right makes it easier to master the big things.

Want to find out how to build, integrate, deploy, and operate an intelligent application with SAP Data Intelligence? Join us on September 17 and get to know the capabilities of SAP Data Intelligence.


Digitalist Magazine


Teradata Appoints Kathy Cullen-Cote as Chief Human Resources Officer

July 31, 2019   BI News and Info

Brings 30 years of recognized leadership in workforce planning, talent management and employee engagement

Teradata (NYSE: TDC), the industry’s only Pervasive Data Intelligence company, today announced that it has appointed Kathy Cullen-Cote as Chief Human Resources Officer (CHRO), effective immediately. Cullen-Cote will lead Teradata Human Resources (HR), including workforce planning, talent management, learning and development, and employee experience. Laura Nyquist, who previously held dual roles of General Counsel and CHRO, will continue in her role as General Counsel.

“With more than 30 years of experience in all facets of human resources, Kathy is a skilled leader with a talent for building a robust culture of employee engagement, enhancing the employee experience, and creating uniquely captivating training programs,” said Oliver Ratzesberger, President and CEO at Teradata. “I am thankful for her commitment to diversity and inclusion, and look forward to Kathy’s success in advancing the skills, motivation and connections throughout our organization. As Teradata continues its business transformation, we will also benefit from Kathy’s experience in thoughtfully guiding the cultural evolution necessary to support transformative growth.”

“I am proud to join Teradata and look forward to supporting and further inspiring its vibrant culture,” said Kathy Cullen-Cote, Chief Human Resources Officer at Teradata. “With incredible employees that consistently apply their collective talent, time and expertise to help Teradata customers find answers to their toughest challenges, it’s easy to see that Teradata offers solutions and services that customers simply cannot find anywhere else. I believe in the power and strength of an organization that values diversity and inclusion and look forward to helping Teradata exceed its customers’ expectations.”

About Kathy Cullen-Cote
Kathy Cullen-Cote joins Teradata from PTC, a Boston-based software company, where she was serving as EVP and Chief Human Resources Officer. At PTC, Cullen-Cote served in HR roles of increasing responsibility as she guided the organization’s growth through cultural transformation programs, global employee engagement initiatives, and the implementation and adoption of cutting-edge HR systems. Prior to PTC, she served in HR leadership roles at Johnson & Johnson, Raytheon, Imark Communications, and Barry Controls. Cullen-Cote was recently awarded the 2018 HR Leadership Forum Bob Gatti HR Leadership Excellence Award. She will be based in Teradata’s San Diego headquarters.


Teradata United States

© 2021 Business Intelligence Info