
Neuralink demonstrates its next-generation brain-machine interface

August 29, 2020   Big Data


During a conference streamed online from Neuralink’s headquarters in San Francisco, scientists at the Elon Musk-backed company gave a progress update. It came just over a year after Neuralink, which was founded in 2016 with the goal of creating brain-machine interfaces, first revealed to the world its vision, software, and implantable hardware platform. Little of what was discussed today was surprising or necessarily unanticipated, but it provided assurances the pandemic hasn’t prevented Neuralink from inching toward its goals.

Neuralink’s prototype can extract real-time information from many neurons at once, Musk reiterated during the stream. In a live demo, readings from a pig’s brain were shown onscreen. When the pig touched an object with its snout, neurons captured by Neuralink’s technology (which had been embedded in the pig’s brain two months prior) fired in a visualization on a television monitor. That isn’t novel in and of itself — Kernel and Paradromics are among the many outfits developing under-skull brain-reading chips — but Neuralink uniquely leverages flexible cellophane-like conductive wires inserted into tissue using a “sewing machine” surgical robot. Musk says the system received a Breakthrough Device designation in July and that Neuralink is working with the U.S. Food and Drug Administration (FDA) on a future clinical trial involving people with paraplegia.

Founding Neuralink members Tim Hanson and Philip Sabes of the University of California, San Francisco, along with University of California, Berkeley professor Michel Maharbiz, pioneered the technology, and the version demonstrated today is an improvement over what was shown last year. Musk calls it “V2,” and he’s confident it will someday take less than an hour, without general anesthesia, to embed within a human brain. He also says it will be easy to remove and will leave no lasting damage, should a patient wish to upgrade or discard Neuralink’s interface.

V2

Neuralink collaborated with Woke Studios, a creative design consultancy based in San Francisco, on the design of the sewing machine. Woke began working with Neuralink over a year ago on a behind-the-ear concept that Neuralink presented in 2019, and the two companies re-engaged shortly after for the surgical robot.

Woke head designer Afshin Mehin told VentureBeat via email that the machine is capable of seeing the entirety of the brain.

“The design process was a close collaboration between our design team at Woke Studios, the technologists at Neuralink, and prestigious surgical consultants who could advise on the procedure itself,” Mehin said. “Our role specifically was to take the existing technology that can perform the procedure, and hold that against the advice from our medical advisors as well as medical standards for this type of equipment, in order to create a non-intimidating robot that could perform the brain implantation.”

The machine consists of three parts. There’s a “head,” which houses automated surgical tools and brain-scanning cameras and sensors, against which a patient situates their skull. A device first removes a portion of skull to be put back into place post-op. Then, computer vision algorithms guide a needle containing 5-micron-thick bundles of wires and insulation 6 millimeters into the brain, avoiding blood vessels. (Neuralink says the machine is technically capable of drilling to arbitrary lengths.) The wires — which measure a quarter of the diameter of a human hair (4 to 6 μm) — link to a series of electrodes at different locations and depths. At maximum capacity, the machine can insert six threads containing 192 electrodes per minute.
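Taken at face value, the insertion rate quoted above implies a rough per-implant timing. A quick back-of-the-envelope check in Python (the constants come from the figures in this article; the variable names are ours, not Neuralink's):

```python
# Back-of-the-envelope throughput for the surgical robot, using the
# figures quoted above: six threads of 32 electrodes each per minute.
ELECTRODES_PER_THREAD = 32
THREADS_PER_MINUTE = 6
ELECTRODES_PER_MINUTE = ELECTRODES_PER_THREAD * THREADS_PER_MINUTE  # 192

implant_electrodes = 1024  # one N1/Link interface
minutes_per_implant = implant_electrodes / ELECTRODES_PER_MINUTE
print(f"~{minutes_per_implant:.1f} minutes of thread insertion per implant")
```

At that rate, the thread-insertion step alone for a full 1,024-electrode interface takes on the order of five minutes, consistent with Musk's sub-hour target for the whole procedure.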

A single-use bag attaches with magnets around the machine’s head to maintain sterility and allow for cleaning, and angled wings around the inner facade ensure a patient’s skull remains in place during insertion. The machine’s “body” attaches onto a base, which provides weighted support for the entire structure, concealing the other technologies that enable the system to operate.

When asked about whether the prototype would ever make its way into clinics or hospitals, Mehin danced around the question, but noted that the design was intended for “broad-scale” use. “As engineers, we know what’s possible and how to communicate the design needs in an understandable way, and likewise, Neuralink’s team is able to send over highly complex schematics that we can run with,” he said. “We imagine this is a design that could live outside of a laboratory and into any number of clinical settings.”

The Link

As Neuralink detailed last year, its first in-brain interface designed for trials — the N1, alternatively referred to as the “Link 0.9” — contains an ASIC, a thin film, and a hermetic substrate that can interface with upwards of 1,024 electrodes. Up to 10 N1/Link interfaces can be placed in a single brain hemisphere, optimally at least four in the brain’s motor areas and one in a somatic sensory area.

Musk says the interface is dramatically simplified compared with the concept shown in 2019. It no longer has to sit behind the ear, it’s the size of a large coin (23 millimeters wide and 8 millimeters thick), and all the wiring necessary for the electrodes connects within a centimeter of the device itself.

During the pig demo, the pig with the implant — “Gertrude” — playfully nuzzled its handlers in a pen adjacent to pens containing two other pigs, one of which had the chip installed and later removed. (The third pig served as a control; it hadn’t had a chip implanted.) Pigs have a dura membrane and skull structure similar to those of humans, Musk explained, and they can be trained to walk on treadmills and perform other activities useful in experiments. This made them ideal test subjects, which is why Neuralink chose them as the third animal to receive its implants, after mice and monkeys.

Above: Elon Musk holding a prototype neural chip.

Image Credit: Neuralink

The electrodes relay detected neural pulses to a processor that is able to read information from up to 1,536 channels, roughly 15 times better than current systems embedded in humans. It meets the baseline for scientific research and medical applications and is potentially superior to Belgian rival Imec’s Neuropixels technology, which can gather data from thousands of separate brain cells at once. Musk says Neuralink’s commercial system could include as many as 3,072 electrodes per array across 96 threads.

The interface contains inertial measurement sensors, pressure and temperature sensors, and a battery that lasts “all day” and charges inductively, along with analog pixels that amplify and filter neural signals before they’re converted into digital bits. (Neuralink asserts the analog pixels are at least 5 times smaller than the known state of the art.) One analog pixel can capture neural signals at 20,000 samples per second with 10 bits of resolution, resulting in roughly 200Mbps of neural data across the 1,024 channels recorded.
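The data-rate figure can be sanity-checked with quick arithmetic: 20,000 samples per second at 10 bits is 200 kbps per channel, which aggregates to roughly 200 Mbps across all 1,024 channels.

```python
# Verify the quoted data rate from the per-pixel sampling figures.
samples_per_second = 20_000
bits_per_sample = 10
channels = 1024

per_channel_bps = samples_per_second * bits_per_sample  # 200,000 bits/s
aggregate_mbps = per_channel_bps * channels / 1e6       # ~204.8 Mbps
print(f"{per_channel_bps / 1e3:.0f} kbps per channel, "
      f"~{aggregate_mbps:.0f} Mbps across all channels")
```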

Above: Neuralink’s N1/Link sensor, shown at Neuralink’s conference in 2019.

Image Credit: Neuralink

Once the signals are amplified, they’re converted and digitized by on-chip analog-to-digital converters that directly characterize the shape of neuron pulses. According to Neuralink, it takes the N1/Link only 900 nanoseconds to compute incoming neural data.
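For illustration only: in its simplest software form, characterizing a neuron pulse starts with detecting a threshold crossing in the digitized trace. Neuralink's on-chip pipeline is dedicated hardware and far more sophisticated, but a naive sketch conveys the basic idea (the data and threshold below are made up):

```python
# Naive threshold-crossing spike detector. Illustrative only; this is not
# Neuralink's algorithm, just the textbook starting point for spike sorting.
def detect_spikes(samples, threshold):
    """Return indices where the signal first crosses the threshold upward."""
    spikes = []
    below = True
    for i, v in enumerate(samples):
        if below and v >= threshold:
            spikes.append(i)   # record the rising edge of each pulse
            below = False
        elif v < threshold:
            below = True       # re-arm once the signal drops back down
    return spikes

trace = [0, 1, 9, 12, 3, 0, 1, 11, 2, 0]  # fake ADC samples
print(detect_spikes(trace, threshold=8))   # -> [2, 7]
```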

The N1/Link will pair wirelessly via Bluetooth with a smartphone up to 10 meters away, through the skin. Neuralink claims the implants will eventually be configurable through an app and that patients might be able to control buttons and redirect outputs from the phone to a computer keyboard or mouse. In a prerecorded video played at today’s conference, the N1/Link was shown feeding signals to an algorithm that predicted the positions of all of a pig’s limbs with “high accuracy.”

One of Neuralink’s aspirational goals is to allow a tetraplegic to type at 40 words per minute. Eventually, Musk hopes Neuralink’s system will be used to create what he describes as a “digital super-intelligent [cognitive] layer” that enables humans to “merge” with artificially intelligent software. Millions of neurons could be influenced or written to with a single N1/Link sensor, he says.

Potential roadblocks

High-resolution brain-machine interfaces (also called brain-computer interfaces, or BCIs) are predictably complicated — they must be able to read neural activity to pick out which groups of neurons are performing which tasks. Implanted electrodes are well suited to this, but historically, hardware limitations have caused them to come into contact with more than one region of the brain or to produce interfering scar tissue.

That has changed with the advent of fine biocompatible electrodes, which limit scarring and can target cell clusters with precision (though questions around durability remain). What hasn’t changed is a lack of understanding about certain neural processes.

Above: The N1/Link’s capabilities.

Image Credit: Neuralink

Rarely is activity isolated in single brain regions, such as the prefrontal cortex or hippocampus. Instead, it takes place across various brain regions, making it difficult to pin down. Then there’s the matter of translating neural electrical impulses into machine-readable information; researchers have yet to crack the brain’s encoding. Pulses from the visual center aren’t like those produced when formulating speech, and it is sometimes difficult to identify signals’ origination points.

It’ll also be incumbent on Neuralink to convince regulators to approve its device for clinical trials. Brain-computer interfaces are considered medical devices requiring approval from the FDA, and obtaining that approval can be time-consuming and costly.

Perhaps anticipating this, Neuralink has expressed interest in opening its own animal testing facility in San Francisco, and the company last month published a job listing for candidates with experience in phones and wearables. In 2019, Neuralink claimed it performed 19 surgeries on animals and successfully placed wires about 87% of the time.

The road ahead

All these challenges haven’t discouraged Neuralink, which has over 90 employees and has received $158 million in funding, including at least $100 million from Musk. However, those challenges have potentially been exacerbated by what STAT News described in a report as a “chaotic internal culture.” Responding to the story via a New York Post inquiry, a Neuralink spokesperson said many of STAT’s findings were “either partially or completely false.”

While Neuralink expects that inserting the electrodes will initially require drilling holes through the skull, it hopes to soon use a laser to pierce bone with a series of small holes, which might lay the groundwork for research into alleviating conditions like Parkinson’s and epilepsy and helping physically disabled patients hear, speak, move, and see.

That’s less far-fetched than it might sound. Columbia University neuroscientists have successfully translated brain waves into recognizable speech. A team at the University of California, San Francisco built a virtual vocal tract capable of simulating human verbalization by tapping into the brain. In 2016, a brain implant allowed an amputee to use their thoughts to move the individual fingers of a prosthetic hand. And experimental interfaces have allowed monkeys to control wheelchairs and type at 12 words a minute using only their minds.

“I think at launch, the technology is probably going to be … quite expensive. But the price will very rapidly drop,” Musk said. “We want to get the price down to a few thousand dollars, something like that. It should be possible to get it similar to LASIK [eye surgery].”

Big Data – VentureBeat

Intelligent Automation On A Next-Generation ERP

April 18, 2020   BI News and Info

Tech Unknown | Episode 6 | Season 2

Featuring guests Tim Crawford, Isaac Sacolick, Andreas Welsch, and Timo Elliott with host Tamara McCleary

Subscribe: Apple Podcasts | Stitcher | Google Play

It’s natural for people to be mistrustful of technology, especially when it comes to automation in the workforce. 

But let’s ask a simple question: If you had to harvest an acre of wheat in an hour, would you rather use:

a) A sickle?

b) A combine harvester?

Most of us would opt for the latter: do the work you can, and let the machines handle the heavy lifting.

Now, imagine you had to process 50 terabytes of business data for your quarterly report. Would you rather use:

a) A spreadsheet?

b) RPA, machine learning, and AI?

Intelligent automation is poised to do for knowledge work what the combine harvester did for farming. Data-heavy jobs will be less repetitive and more efficient, but will undoubtedly require new skillsets and processes to keep up with the tech.

This episode, we hop in Tamara McCleary’s time-traveling Tesla to explore the history and future of work. We ask the experts how business leaders can prepare for intelligent automation, both on the technological side and the human side.

Listen to learn:

  • How to get buy-in across the organization for your intelligent automation initiative
  • How intelligent ERP can serve as the backbone for automation
  • What businesses can accomplish with intelligent automation in place
  • What skills and mindsets to train and hire to prepare for intelligent automation

Discover how intelligent ERP leverages artificial intelligence and automation to transform your business processes – in the cloud or on-premises.

About our guests:

Tim Crawford is the CIO strategic advisor at AVOA, advising business executives on strategic IT transformation initiatives. He also hosts the CIO in the Know podcast.

“With intelligent ERP… you get to take advantage of newer technologies that are coming down the road when they become available, which is a huge improvement over the traditional approach, which could mean a 5–7 year lag in adopting new technology.” –Tim Crawford

Timo Elliott is the global innovation evangelist at SAP. He has spent over 30 years presenting to business and IT audiences in over 58 different countries, talking about digital transformation, AI, analytics, and the future of digital marketing. He also blogs at timoelliot.com.

“The history of computing is absolutely about augmenting human intelligence… I really think we’re at the start of a new golden age for knowledge workers.” –Timo Elliott

Isaac Sacolick is the president and CIO of StarCIO and the author of Driving Digital: The Leader’s Guide to Business Transformation through Technology. 

“The way we used to do things doesn’t work anymore. We need more accurate data, we need to be looking at forecasting more frequently, and the volumes of data we’re looking at are bigger.” –Isaac Sacolick

Andreas Welsch is the head of intelligent processes, SAP S/4HANA Product Management. He holds a PhD in information systems from the Technische Universität Darmstadt.

“Expediting many steps through a concerted effort of different intelligent technologies along an end-to-end process: That’s where I see the key value that we can bring in ERP.”–Andreas Welsch

Did you miss our last episode?

Check out our previous episode with guests Carla Gentry, Iver van de Zand, and Timo Elliott: “Intelligent Analytics: The Search For Hidden Treasure In Your Business Data.” Click here to listen.

Episode 6 Transcript:

Tamara McCleary: Welcome to Tech Unknown, a podcast to prepare your organization for the tech-centered future of business. I’m Tamara McCleary, CEO of Thulium.

Our big umbrella topic this season is data. We’re digging into how sharing data across the organization – collecting, processing, and analyzing it – can increase efficiency, reduce costs, and improve customer service. 

This episode, we’re going to dig into intelligent automation. Here’s Tim Crawford, CIO strategic advisor at Avoa, with an overview:

Tim Crawford: A modern ERP solution needs to have a couple of components, one of which is that it’s leveraging some of the newest technology available on the market today. Things like machine learning, artificial intelligence, RPA. All of these really bring together a combination that will really help out companies. They need to look for solutions that will address not just the problems that they have today, but also try and project out and understand what are the problems they’re going to have to address in the future.

Tamara: Before we dive in too deep, let’s start by defining some terms Tim introduced. When we talk about intelligent automation, we’re talking AI, machine learning, and robotic process automation, or RPA. Intelligent automation is unique to “knowledge workers”: that is, people who work with data and information, rather than tangible materials. 

With those definitions in mind, we can start exploring how automation will change the job description for “knowledge workers.” The easiest way to do that – provided you have a magical podcast time machine – is to see how physical automation changed things for manual workers.

Let’s just fire up my time machine – it’s my Tesla, not Doc and Marty’s DeLorean. I’ve made just a few modifications to my Model X, and I’m ready to fire up this bad boy! 

Ooooh, this is interesting! Here we are back in the pre-industrial era to see how people worked. Back then… uhhhh, I mean, back now… EVERYTHING was handmade, one at a time. Simple items we take for granted, like a pair of shoes, would take a single shoemaker hours of time to complete.

Cobbler: Here is thy right shoe, Milady. In but two fortnights’ time, I shall complete the left.

Tamara: Thank you, my… uh, lord? Sir Cobbler? Whatever. You get the picture. Having one person handle every tiny task it takes to make a shoe… well, it’s just not scalable. But as technology evolved, we were able to break down shoemaking into its individual tasks, create machines for many of them, and humans just fill in what the machines can’t do. Now we can turn out thousands of pairs of shoes a day, instead of one every two weeks. No offense, Sir Cobbler.

Cobbler: Thousands of shoes a day, you say? How could there be that many feet in the whole of the earth? And what, pray tell, is this black metal book thou hast?

Tamara: Oh, this? It’s called a laptop computer… it, uh, well, you know what? Never mind, you’re right. It’s a lovely metal book the blacksmith down the road made for me. I just love metal, don’t you? Okay, I have to run now, bye-bye, thanks!

We’d better get out of here before I step on a butterfly or something and screw up our entire existence… 

Oh, shoot!  Dang it! I left my metal book, I mean my laptop back at the cobbler’s shop. Uh… no worries, it’ll be fine.

Okay, I digress… anywho, the point of that trip through time wasn’t just to freak out a cobbler – it was to highlight just how long a history humanity has with automation. Our latest evolution, using technology to automate knowledge work, is just one more step forward.

Here’s how Timo Elliott, global innovation evangelist for SAP, described it in our first episode:

Timo Elliott: The history of computing is absolutely about augmenting human intelligence. Actually forget computing, the history of science. I actually find this fascinating. If you imagine agriculture, a long time ago, people were really restricted to what one person could do in a field. You know, you could try and plow, you had oxen to help you, but now one person with a tractor can do an unimaginable amount of farm work. I think we now have the opportunity to do the same thing but for knowledge workers. Clearly, knowledge work is very valuable, I believe that human beings are the most powerful technology we have. And the ability for these latest technologies, like artificial intelligence, to expand on what an individual can do, I really think we’re at the start of a new golden age for knowledge workers.

Tamara: Even as we acknowledge that intelligent automation is a logical next step, however, it’s natural for people to feel uneasy. Change can be hard, and scary, and nobody wants to think about losing their job to a robot – OR a piece of software.

So, it’s up to business leaders to not only invest in the technology for intelligent automation but also to help people embrace that technology and make the most of it. 

Here’s a great tip for helping employees through the process from Isaac Sacolick, president of StarCIO and author of the book Driving Digital.

Isaac Sacolick: When you use the word intelligent automation, I think that works well when we’re talking about it as a platform and as a capability. I think it works terribly inside the organization. I think when the organization hears automation, it triggers two reactions. With the staff, it’s job loss, it’s change, it’s, “I gotta learn a whole new set of skill sets,” it’s, “What am I gonna be doing in the future?” and it’s, “Can this system actually do everything that I know how to do, and all the best practice, and all my expertise, and do it as well as I can?”

So, I think automation is a really bad word for CIOs to be using. And so, I think what you really need to do is focus on where people are needed, right? So, not what you’re not doing anymore, but what you will be doing, where your expertise is welcome, you know, what the new model is gonna look like, and how are you gonna serve customers differently, or how are you gonna be able to do things faster. Or maybe even it’s, you know, how are you gonna have a better work-life balance because you’re gonna be doing a lot more intelligent workarounds processing analytics or doing decisions that the machine can’t do very well, that you’re gonna be part of a learning organization that learns and changes with the advent of new technologies and what their capabilities are. So, I think that’s how you overcome things and start getting more alignment and buy-in and bringing some of the automation technologies into play.

Tamara: As Isaac is saying, it’s crucial to get executive and employee buy-in on the benefits of intelligent… you know… automation. Fortunately, there are plenty of potential benefits to help you make the case. Here’s Andreas Welsch, head of intelligent processes, SAP S/4HANA product management:

Andreas Welsch: So I think if we look at ERP historically, what we’ve seen is that it has been and still is the backbone of many companies in their operations, right? Whether it’s planning, operations itself, or finance. And traditionally these areas are fairly labor-intensive, right? Finance, for example, if you think about a process called cash application in the area of accounts receivable, you may receive incoming payments and you need to match them to open invoices; it’s a pretty labor-intensive and tedious task that somebody needs to do on a daily basis. So through technologies such as machine learning, AI, and robotic process automation, we’re able to increase automation in key business processes, end-to-end. And that’s where you see the key value of intelligent and emerging technologies being able to accomplish that.

Tamara: Perhaps one of the easiest ways to illustrate the value of next-generation ERP and automation is to see what happens when you don’t have it. Here’s Isaac with a cautionary Winter’s Tale:

Isaac: I walked into a CIO role a number of years ago. It was December. We had just finished our board meeting. We had projected a fairly good year. I walked in the next month for our first management meeting and heard the entire discussion around that essentially our forecasts were off, and instead of it being a plus year, it actually had been a down year.

And so, as the CFO was going through the story and trying to understand the analytics around it, it was a mix of poor forecasting, poor data coming in at the wrong time, and the effect of currencies. We were a global company, and there were currency issues that were miscalculated. And a lot of calculations were done outside of the ERP, in spreadsheets that just had bad formulas and bad mechanics around them. You know, that’s a telltale way of saying the way we used to do things doesn’t work anymore; we need more accurate data, we need to be looking at forecasting more often, more frequently, and more accurately, and the volumes of data that we’re looking at are bigger.

Tamara: Isaac clearly illustrates the problems we’re trying to solve here: The complexity of data flowing through the organization, the potential for human error, wasted effort… and above all, the sheer <yells> VOLUME…

Sorry, the sheer volume of data involved. 

Here’s Andreas with an example of how intelligent automation can meet these challenges.

Andreas: So these technologies work best when they’re intertwined, when they are combined. And one example that I can give you is, for example, we’ve been working with a customer in life science for quite some time now. Specifically on this process called cash application. And for them, it was key to get more automation out of their existing process where they already had some rules that they have managed and maintained for some time but they were only getting so far.

So to get the last mile, so to speak, they were looking into a new way of doing that with machine learning and artificial intelligence. And in that example, we are training a machine learning model based on historic information of payments that have been received and how they were matched to open invoices. And we applied this to new payments that are coming in to give a recommendation of how these payments should be matched. And if this machine learning model has such a high confidence that this lump sum payment should be assigned to these five invoices, well, then let’s go ahead and let’s have the system automatically clear and apply this payment. And then the cash application analysts only need to look at those payments that the system was not able to clear automatically. So that’s one point.

Extending from that to robotic process automation: in that process, customers receive so-called payment advices, where their customer tells them, “Hey, next week I’m going to pay you $100,000 for these five invoices.” Now, today, somebody needs to go to a portal, download this PDF, upload it into ERP, and then kick off the next step in that process. Using RPA, we’re able to log into this portal automatically, download the invoice, upload it to ERP, and then kick off the cash application process. So driving more end-to-end automation through a combination of different technologies.

So in the example of this customer, what it translated to was that they were able to accelerate their quarter-end close in one of their key markets by more than 50%. So talk about that as success and benefits of automation. Secondly, what they found was that for some payments, where it has taken the accountants over 100-120 days to make these matches correctly, it’s taken the machine learning-based cash application system only a single day and one run to do that. So where that has helped our customer specifically is by reducing the amount of working capital needed and by optimizing one of the key KPIs in finance.
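Stripped of the ERP specifics, the confidence-gated matching Andreas describes reduces to a simple routing rule: auto-clear a payment only when the model is confident enough, and queue everything else for an analyst. A minimal sketch in Python (the threshold, names, and mock model below are illustrative, not SAP's implementation):

```python
# Route each incoming payment based on the matching model's confidence.
AUTO_CLEAR_CONFIDENCE = 0.95  # illustrative threshold, not a real SAP value

def route_payment(payment, predict_match):
    """predict_match(payment) -> (matched_invoice_ids, confidence)."""
    invoices, confidence = predict_match(payment)
    if confidence >= AUTO_CLEAR_CONFIDENCE:
        return ("auto-cleared", invoices)   # system applies the match itself
    return ("manual-review", invoices)      # analyst handles the leftovers

# Mock model standing in for the trained matcher: a lump-sum payment
# matched to five open invoices with high confidence.
def mock_model(payment):
    return (["INV-1", "INV-2", "INV-3", "INV-4", "INV-5"], 0.98)

status, invoices = route_payment({"amount": 100_000}, mock_model)
print(status, invoices)
```

The practical effect is the one Andreas cites: the automated path clears the bulk of payments, and human effort concentrates on the low-confidence residue.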

Tamara: Andreas’ example shows just how much potential this technology has for businesses who fully implement it. 120-day processes reduced to a SINGLE day! 50% faster quarter-end closes! 

But intelligent automation is about more than just time saved. It’s about what businesses can DO with that saved time. Automation on a next-generation ERP makes businesses more agile, more flexible, and freer to invest their time in exploring innovative new offerings and lines of business.

Here’s Isaac again on the transformative possibilities:

Isaac: I wrote my book Driving Digital all about how organizations need to think about transformation and what’s the process around it, what’s the collaboration around it. And so, you know, I’ll go through a number of the things here. You know, starting with just transforming the business model, right? So, how you’re selling products today, what the pricing is around it, how they’re structured, what are you bundling, what are you selling separately, thinking about subscription models or even tiering models in terms of consumption.

These are all things… you know, I ran a lot of data businesses for those. I was the CIO in data businesses. And these were very difficult things to do, you know, to have skews for products that had multiple configurations and multiple ways of going to market and different ways of doing subscriptions around. It was really hard to do five, 10 years ago, and now they’re more commonplace. So, we’re starting with, you know, how do you actually transform the business model so that you can get recurring revenue streams and the consumption sort of matches up with your spend, and so that you, you know, customers have a better idea of what they’re buying. So, I think that’s, you know, one big thing that the endpoint looks like.

I like looking at customer experience, and more specifically, you know, what new markets companies want to get into. And so, people talk to me about transformation, and it always looks like you’re adding more, you’re doing more things than you’ve ever done before. You’re adding more technologies, you gotta have more capabilities. You also have to look at, you know, what new markets you want to go into and what markets and products you wanna exit out of.

And, again, I think the ERP is the heart of, you know, what types of businesses are you profitable in or showing growth that you have a market to go after to sell into. And transformation is really a practice of doing that on a very recurring basis. Being agile and experimental. So, how do you throw lightweight ideas out there and bring them to market quickly, seeing what kind of revenue you’re driving from it, what kind of customer feedback you’re getting from it? So, I think that’s a big part of it.

Tamara: All of this sounds great, right? But I can almost hear your objections already. 

Voice: But Tamara, my organization is stuck with a legacy ERP and we just don’t have the support to make these types of major changes!

Tamara: Wow, yeah, did you hear that or was that just my suggestible imagination? Clearly I’ve been working alone and remotely for too long!

Voice: If only we had an expert who could explain how to get started with a digital transformation! 

Tamara: Yeah, if only… someone like… Andreas, perhaps?

Andreas: So I think what’s really key here is, as with many of the emerging technology projects, start small, you know, do your pilot, your proof of concept, get a solid understanding of how the technology, how the product works, get some quick wins, and then from there on [you] expand into surrounding areas. 

You’re not going to become an intelligent enterprise overnight. It takes some work. It takes some transformation. So starting with very well-defined steps, that you can also quantify, where you can quantify the benefits and then looking for things surrounding them, is key.

Tamara: Starting small and racking up quick wins makes a lot of sense. But there’s more to it than picking the right pilot program. When you move to a next-gen ERP, you have to really reexamine your relationship with the ERP.

ERP voice: Tamara, we need to talk. It’s not you, it’s me. I hope we can still be friends…

Tamara: Uh, yeah, NOT what I meant, ERP. Here’s Tim to explain.

Tim Crawford: One of the common questions that comes up today is: how do I go from a traditional ERP approach, or thinking in that framework, and move to a cloud-based approach with ERP systems? And that question can be a little overwhelming and a little daunting. One of the things that I found as a success factor for that move is that with traditional ERP, the mentality was: we change the software to be customized to our business processes. Those that are finding success with cloud-based ERP solutions are actually flipping that on its head, and instead are asking: how do we use this as much as possible out of the box and customize it as little as possible?

Now that in itself may sound simplistic to those who have been initiated in the process of actually implementing ERP solutions. But it’s easier said than done, because you’re talking about changing business processes to match those of a common software product. The benefit is that you then get to take advantage of newer technologies as they become available, which is a huge improvement over the traditional approach, which could mean as much as a five- to seven-year lag between when new technology becomes available and when it actually shows up in the enterprise for users to take advantage of. It is huge.

Tamara: Essentially, it’s about letting the software’s capabilities evolve your business processes – not trying to make the software fit your existing workflows. With a next-generation ERP, you’re not buying a set configuration that matches where your business is now. You’re investing in a living, continually developed solution that can grow and change with you. Instead of customizing before you buy, you can do it on the fly with add-ons, plug-ins, and API connectors.

Here’s Andreas again to wrap up what we’ve been talking about… wait… shoot, that audio clip is on my laptop. Which I left in the 1400s. Let’s pop back and grab it real quick.

Here we are back in our pre-industrial… wait… was that factory there last time? I hope I didn’t mess up the timeline too much.

Cobbler: Ah, Milady, it is good to see you again! I hath finished your right shoe! And these 1,000 additional pairs of shoes!

Tamara: Sir Cobbler! This is amazing! How did you scale up your production?

Cobbler: Well, I didst use the metal book you left here and discovered the principles of “intelligent automation.” Verily, I hath a complete picture of my supply chain, I hath automated my manual tasks, empowered my knowledge workers, and maximized my efficiency!

Tamara: Well, geez – uh, yeah… that’s just great. Say, I didn’t catch your name… what do they call you?

Cobbler: Well, my family trade is tailoring, and my Christian name is Charles, so I am known as Chuck of the Tailors! 

Tamara: Congratulations, Chuck of the Tailors. And hey, one more hint, since we’re meddling in the time-space continuum… leather shoes are great, but have you thought about canvas?

Cobbler: Hmmm… sturdy and breathable… canvas is a good material for shoes, yes! And we could make bright colors and emblazon them with a star.

Tamara: Absolutely. Good luck, Chuck of the Tailors. You can keep the metal book, just let me play this wrap-up from Andreas Welsch:

Andreas: So if we look at the full scope of opportunity, and we’ve talked about that a bit earlier on as well, it’s really that combination of different technologies into a concerted effort of driving end-to-end automation. So, for example, you can have your RPA bot that is checking your inbox for incoming payment advices, to stick with a finance example. When you do receive a payment advice, the bot automatically downloads it to your desktop, stores it there temporarily, opens up your ERP screen, and uploads the document there. That’s the RPA part. The second part that then kicks in is machine learning: with a form of intelligent OCR, if you will, OCR plus machine learning, we’re processing that PDF document, extracting information from it, such as the payee, the amount, the date, currency, and line item information, and storing this structured data in ERP.

Once it’s there, the next component using machine learning is triggered on a regular basis: in this example, a cash application where you match incoming payments to open invoices and augment this information with what we just extracted from our payment advices. Now, if there are any changes because somebody created a dispute and one of the invoices is no longer valid, we can alert an analyst to the fact that a dispute needs to be created. In that process, we can also kick off another bot that will send out an email automatically to this customer asking for additional information or a justification for the dispute. And lastly, we monitor the inbox again for any incoming communication and alert the analyst for dispute resolution. So really, we’re expediting many of these steps through a concerted effort of different intelligent technologies along an end-to-end process. That’s where I see the key value that we can bring in ERP.
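The flow Andreas describes can be sketched in a few lines. Everything below is illustrative: the field names, the matching rule, and the alert strings are invented for this example, and a real implementation would call actual RPA and OCR services rather than these stand-ins.

```python
from dataclasses import dataclass

@dataclass
class PaymentAdvice:
    payee: str
    amount: float
    currency: str
    invoice_ref: str

def extract_fields(raw_text: str) -> PaymentAdvice:
    """Stand-in for the OCR + machine learning extraction step."""
    fields = dict(line.split(": ", 1) for line in raw_text.splitlines())
    return PaymentAdvice(fields["payee"], float(fields["amount"]),
                         fields["currency"], fields["invoice_ref"])

def match_to_invoices(advice: PaymentAdvice, open_invoices: dict) -> str:
    """Stand-in for the cash-application matching step."""
    expected = open_invoices.get(advice.invoice_ref)
    if expected is None:
        return "alert-analyst: unknown invoice"   # hand off to dispute handling
    if abs(expected - advice.amount) > 0.01:
        return "alert-analyst: amount mismatch"
    return "matched"

raw = "payee: ACME Corp\namount: 1200.00\ncurrency: EUR\ninvoice_ref: INV-42"
advice = extract_fields(raw)
print(match_to_invoices(advice, {"INV-42": 1200.00}))  # matched
```

The point of the sketch is the hand-off structure: each stage produces structured data the next stage consumes, and only exceptions reach a human analyst.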

Tamara: Intelligent automation isn’t a single technology or process – it’s a suite of solutions that all work together. First, there’s next-generation, intelligent ERP – that’s the foundational layer that makes everything possible. With the ERP as the backbone, you can add tools that use AI, ML, and RPA to automate processes, reduce human error, and increase efficiency. All of which leaves the humans free to continue innovating, using the ERP’s data-analyzing capabilities to drive that exploration.

And from there it’s experimentation, optimization, identifying new lines of revenue, new business models, and TOTAL GLOBAL DOMINATION! [echo]

Well, at the very least, you’ll have a digitally transformed organization that is ready to outpace the competition and shape the future of business.

Thanks for listening to Tech Unknown. And thanks to my guests Isaac Sacolick, Tim Crawford, Andreas Welsch, and Timo Elliott. Please subscribe on iTunes, Google Play, or wherever you listen to podcasts.

I’m Tamara McCleary and until next time: Stay sharp, stay curious, and keep exploring the unknown.

To learn more about intelligent ERP, go to s4hanaopp.com. You can also find a transcript of this episode and more at digitalistmag.com. And make sure to subscribe wherever you listen to podcasts.


Digitalist Magazine


Next-Generation Competence Center (Part III): Aligning Business And IT

August 1, 2019   BI News and Info

In my previous article in this series, I examined the question: “Where is the business going, and consequently, what should the role of IT be?”

In this article, I will share an approach to align the business and IT strategy in a way that will keep the promise of long-lasting corporate brand identity.

You can say, “Where business goes, IT follows.” The problem is that the “where” (i.e., a desired target state) is hiding the “who” (i.e., who you are and who you want to be by reaching a new destination).

This reminds me of an evergreen quote from Lucius Annaeus Seneca: “Our plans miscarry because they have no aim. When a man does not know what harbor he is making for, no wind is the right wind.”

The point is that the business and IT aims must be aligned, and the two identities should converge. The question is how to do that if what IT wants to do first is not necessarily what the business is asking for.

The conflicting identities of business and IT popped up a few months ago when I was breaking the ice with customers who were interested in managing their systems with a more agile IT competence center. They wanted this competence center to run and support the deployment of SAP S/4HANA, which was going live imminently.

I was looking for a model that would help them plan and build for their system landscape’s evolution. I ended up combining three key philosophies from very different sources:

  1. Logical levels learned in The Art & Science of Coaching training, which provides the structure for moving from an inspiring vision to more concrete actions in specific times and places
  2. The corporate brand identity matrix, from the Harvard Business Review, to find the “unique twist” of a company’s identity based on its brand or core values
  3. Diversity and inclusion concepts learned from SAP’s employee training, which add the “fairness” spices

Starting with “logical levels”

The student guide for the Erickson Academy’s The Art & Science of Coaching shows a pyramid that elegantly sorts the structure and dependencies of “five plus one” logical levels:

Vision is the “plus one” level of the pyramid. It inspires identity (the top logical level), which is rooted in core values; those values are the reason that specific behaviors (skills and actions) define the different ways to play in a given context.

Those general concepts also work well for business and IT people working together in alignment for a common vision, mission, and strategy:

  1. Identity: Who are you now? What sort of person/organization would you rather be? (Shifting into a new role)
  2. Values: Why is this important? What values does it have? (Values behind identity and vision)
  3. Skills: How will you achieve it? What capabilities do you have? What skills do you need to develop? (Knowledge, experience, methods, and tools)
  4. Actions/behaviors: What actions need to be taken? What steps could you take to support X? (Action plan, steps, behaviors)
  5. Environment: Where will you want this? When will you do it? (Time and geography)

Deriving IT mission, vision, and strategy from “corporate and brand identity”

Stephen A. Greyser and Mats Urde, in their HBR article “What Does Your Corporate Brand Stand For?” (Harvard Business Review, January–February 2019), illustrate a framework for defining a corporate brand identity.

The brand core is at the center of the framework, surrounded by eight elements, making room for nine questions to be answered.


After reading this article, I thought of my current next-generation competence center design project and decided to first answer the corporate identity questions from the business point of view, then answer the same questions from the IT point of view.

My IT counterparts were dubious after their first attempt to answer the questions from the IT point of view. Later, when I insisted on running the exercise in a timeboxed fashion (I stepped out of the room for less than an hour), I returned to excitement and realized how well they had done filling out the forms twice, once from each point of view.

The first time you answer the nine questions from both points of view – business (as suggested by HBR) and IT (for the sake of the next-generation IT competence center) – your answers might look sloppy or disconnected. That is because consistency must still be checked along four directions, each aimed at strengthening a different “angle” of the matrix.

The clearer and more logical your definitions (answers) and narrative (combination of answers), the more consistent the identity matrix and the stronger your identity will be, provided that all four directions properly cross the very same core values, i.e., the brand core.


In short, check the nine answers in clusters of four, as follows:

  1. Strategy: Is the mission (what you promise) consistent with where you want to be (vision)?
  2. Competition: Is what you offer (your value proposition) unique due to particularly distinctive skills?
  3. Interaction: Does the way you interact (your relationships) delight your customers thanks to uncommon behaviors rooted in a well-known culture?
  4. Communication: Is your communication style fostered by unique personality traits?

Repeat this process a few times and try to discard concepts that don’t contribute to clear answers. Sometimes less is more.

Adding “diversity and inclusion”

Building a culture of diversity and inclusion plays an important role in aligning business and IT strategy.

Diversity and inclusion training can stretch minds, enrich vocabulary, and enhance the ability to think differently. It will create a balance between individual and collective culture and help you achieve measurable targets, fueling a new, more open, and correct way of working.

Here are some things to consider when building a culture of diversity and inclusion:

  1. Culture elements: Shared values, knowledge, experiences, beliefs, and behaviors
  2. Culture transmission: Socializing agents (people we are with) and institutions (school, government)
  3. Cultural attitudes: Role models made of different attributes, to be tuned up to better fit a given target cultural attitude:

    • Status: Rank-oriented (do what the boss commands) vs. equality-oriented (challenging/arguing with boss is OK)
    • Identity: Individual-oriented (what I/he did) vs. group-oriented (the result we made)
    • Activity: Task-oriented (duty first) vs. relationship-oriented (people first)
    • Risk: Risk-taker (change-driven) vs. stability-seeking (steady state is better)
    • Communication: Direct (talk/write clearly – facts only) vs. indirect (room for interpretation – body language)

Thinking more deeply about who we are and who we want to be is always good for us and for the people who want to take action.

Are you intrigued or skeptical?

I’ll be happy to hear your comments and adjust the recipe for aligning business and IT strategy.

Stretch your mind on the Next-Generation Competence Center by reading my previous articles on the topic, “Part I” and “Part II.”




How Change Data Capture Can Power Your Next-Generation Analytics and Mobile Applications

September 11, 2018   Big Data

Ashwin Ramachandran

September 11, 2018

Data integration tools excel at getting an enterprise’s disparate data from systems like the mainframe, IBM i, relational databases and data warehouses to modern distributed platforms such as Hadoop. Oftentimes an organization begins its big data journey by addressing the challenge of “filling the data lake”. This involves sourcing many different data assets, standardizing their formats, assessing and improving the quality of the data, and getting them in a suitable store for analytics and other business initiatives.

Once this initial need is met, however, the challenge of keeping that data fresh can prove more difficult to overcome. Detecting changes to source data can place undue strain on the transactional systems that host that data. More importantly, those updates need to be made available to a variety of downstream consumers at application-specific delivery rates.
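As a concrete, if simplified, illustration of the freshness problem: the naive approach is query-based change detection, polling for rows modified after a high-water mark. The table and column names below are hypothetical. Repeating this query against a busy transactional system is exactly the strain described above, which is why log-based CDC tools read the database transaction log instead.

```python
import sqlite3

# Hypothetical accounts table with a last-modified timestamp column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, balance REAL, updated_at INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)",
                 [(1, 100.0, 1000), (2, 250.0, 1005), (3, 75.0, 1010)])

def changes_since(watermark: int):
    """Return rows updated after the given high-water mark, plus the new mark."""
    rows = conn.execute(
        "SELECT id, balance, updated_at FROM accounts WHERE updated_at > ?",
        (watermark,)).fetchall()
    new_mark = max([r[2] for r in rows], default=watermark)
    return rows, new_mark

rows, mark = changes_since(1004)   # only accounts 2 and 3 changed after t=1004
print(len(rows), mark)             # 2 1010
```

Note that this approach also misses deletes and requires every source table to carry a reliable timestamp, two more reasons log-based capture is usually preferred.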


Distributed Messaging Framework

This growing requirement is prompting users to move away from point-to-point data replication topologies in big data environments. With many different downstream consumers of the data, the design process, maintenance, and overhead of delivering data in a point-to-point fashion is not manageable or scalable. Instead, there is an increasing drive towards adopting distributed messaging frameworks like Apache Kafka to serve as the backbone of the enterprise data architecture. Let’s look at an example of how this strategy may manifest at a fictional large bank named Gamma.

Gamma uses the mainframe as its system of record. All deposits, withdrawals, and changes to account details and status ultimately trigger a transaction to an underlying Db2 database running on IBM z/OS. Gamma is looking to do several things with this account data.

  1. Gamma wants to improve its customer experience and increase satisfaction by making account updates immediately available in the mobile app used by its clients. This data must be made available to the app immediately, to prevent a user from seeing a stale balance reported after she has made a withdrawal from an ATM, for example.
  2. In an effort to shift to a more customer-focused business, Gamma wants to perform customer segmentation analysis and deliver targeted marketing campaigns, using the daily transaction data from Db2 as one of several sources to power this initiative. The data will ultimately land in a store like Impala, where analytics will be run.
  3. Gamma is also concerned with fraud and money laundering detection, and therefore wants to leverage insights within the live transaction data to suggest corrective actions in cases of suspicious behavior. Since fraud detection must be quick and requires immediate action, this application is very sensitive to the rate at which changes can be made available to it.

Delivery of Data

All three of these use cases rely on fresh data coming from the live transactional systems of record. All three use cases require different delivery rates of that data to the downstream applications that require it. While application 2 requires periodic delivery, applications 1 and 3 require data as soon as it is committed to the Db2 system of record. It becomes quite clear that performing point-to-point delivery of fresh data between Db2/z and each of these applications individually would become unwieldy, especially as new applications and use cases are deployed within Gamma’s big data environment. The publish/subscribe approach, on the other hand, is an architecture that is much better suited to this type of data delivery requirement. By using Kafka as the location to which live changes are replicated in real-time, each application can consume only what it needs as often as it requires that data.

This simple use case highlights how strategic initiatives originating from a business requirement—adoption of mobile technologies, customer-centric services, anomaly and fraud detection—are driving enterprises to adopt new technologies and data delivery paradigms as a centerpiece of their IT strategy. Ultimately, an organization’s ability to liberate its valuable data assets and leverage these treasure troves of unmined insight will determine how well it can deliver value to its customers beyond that delivered by its fiercest competitors.

Check out our webcast for some of the advantages and disadvantages of various change data capture strategies.


Syncsort Blog


Alphabet investment arm GV backs SpyBiotech, an Oxford University spinout working on ‘next-generation’ vaccines

April 2, 2017   Big Data

SpyBiotech, a life sciences spinout from the U.K.’s Oxford University, has raised £4 million ($5 million) in seed funding from Oxford Sciences Innovation (OSI) and Alphabet investment arm GV (formerly Google Ventures).

As traditional approaches to developing vaccines are typically time-consuming, and not always effective, SpyBiotech is working on what it calls a “proprietary protein superglue” technology. Called SpyTag/SpyCatcher, it makes it possible to create vaccines “more quickly, cheaply, and effectively,” according to the company.

A clue to the company’s raison d’être comes from its name, which borrows from the species of bacteria known as Streptococcus pyogenes (Spy), which many people may know from common infections such as strep throat. The company split Spy into a peptide (SpyTag) and a protein partner (SpyCatcher). But when separated, the two have an overwhelming urge to rejoin each other, and this is the basis of the biochemical superglue that SpyBiotech believes is the “missing link” in the development of effective vaccines.

The company will initially focus on infectious diseases, including viral infections, but it plans to develop its technology further and apply it to “a wide variety of conditions,” covering any number of future outbreaks and pandemics. The funding will be used to initiate phase I trials, and the company is already planning another round of funding “in the near future” to work in other areas.

“Researchers in the vaccine field, including us, have struggled to make effective VLPs (virus-like particles) against many diseases for a long time,” said Sumi Biswas, associate professor at the Jenner Institute at Oxford University. “We view this superglue technology as a game changer to enable faster development of effective vaccines against major global diseases. We are excited to begin the journey of taking this versatile and innovative approach forward and moving our new vaccines from the laboratory to human clinical testing.”

GV has a long history of investments in life sciences, with around half of its European investments involving health-tech companies.

“SpyBiotech has established a novel approach using platform VLP vaccine technology that shows promise in a number of addressable markets,” noted GV partner Tom Hulme. “We’re looking forward to working with a team of world-class scientists with extensive experience in vaccine development — spanning from vaccine design through to Phase II clinical trials — to develop more effective vaccines for a wide range of global diseases.”

GV isn’t the only Alphabet offshoot investing in life sciences. Back in January, Alphabet’s health-focused technology arm Verily Life Sciences (“Verily”) announced that it had received an $800 million investment from Singapore-based investment firm Temasek. Verily is setting out to build technology to “better understand health” and to prevent and manage diseases.

Alphabet actually has another health-focused subsidiary, called Calico, which was founded in 2013 by Google and Arthur D. Levinson as a biotech research entity with a focus on extending human life.

Elsewhere, Mark Zuckerberg and his wife Priscilla Chan, through The Chan Zuckerberg Initiative, recently announced a new $3 billion program designed to cure, prevent, or manage “all diseases” within their children’s lifetime.

Sign up for Funding Daily: Get the latest news in your inbox every weekday.


Big Data – VentureBeat


Chatbots, IoT implementation drive SAP next-generation CRM

February 22, 2017   BI News and Info


As CRM chatbots improve, as well as IoT implementations to support customer service, SAP users find new uses for the tech, thanks to tighter integration to S/4HANA.

SearchCRM: News on CRM trends and technology


Next-Generation Machine Learning Ushers In Innovations

November 3, 2016   SAP

During the famine in northern Ethiopia in the 1980s, governments and humanitarian agencies from around the world poured food into the region at great expense. Despite their efforts, which were hampered by a civil war in Ethiopia, 400,000 people died.

Compounding the tragedy was the fact that southern Ethiopia had surplus food that never made it north. “In every major disaster, the resources needed to respond may be available locally, but because of the inability to communicate accurate needs and offers of resources, needed items such as food are often shipped halfway across the world at high costs,” says Gisli Olafsson, a humanitarian advisor to NetHope, an organization that connects nonprofit organizations with technology innovators.

Today, governments, nongovernmental organizations, and public and private sector entities have the ability not only to track aid but also to determine the best way to deliver it to conflict zones and fragile states.

For example, the World Food Programme (WFP), the largest humanitarian agency in the world, delivers over 3 million metric tons of food annually, which feeds about one-tenth of the hungry people in the world, as reported in the WFP’s 2015 annual report. As was the case in Ethiopia, however, supply isn’t always the issue. Sometimes there’s plenty of food available locally, but people can’t afford to pay for it. Aid may also become fodder for the black market rather than food for children.

To improve access to food and other nutritional needs, the WFP started using electronic vouchers and digital cash several years ago. Since then, the WFP reports, it has distributed more than US$1 billion in aid through digital means to those in need. It’s part of an effort by humanitarian organizations and governments to reinvent aid delivery in the digital economy.


The need for change is indisputable. Despite government and private humanitarian contributions that totaled $28 billion in 2015, as reported by Reuters, 25 million people still need assistance, according to a February 2016 United Nations report, One Humanity: Shared Responsibility. Organizations involved in aid delivery are responding by using technology to help locate those in need faster, zero in on their specific need, speed delivery, and reduce losses from corruption and thievery.

“Technology is driving new means of delivering humanitarian aid in ways we could never before achieve. It’s amplifying what the world can do,” says Olafsson. Innovations in aid delivery are also part of achieving the United Nations’ 17 Sustainable Development Goals initiative, which includes ending poverty, feeding the hungry, and fighting diseases.

Electronic Identification: Making Sure All People Count

Before people can receive even the most basic human services, they first must be identified—yet many in this world have not been. For example, Unicef reports that the births of nearly 230 million children under the age of five have not been officially recorded. Without an identity, these children are invisible, excluded from basic human rights, such as healthcare, social benefits, and education, as well as from humanitarian aid that could save their lives.

Electronic identification solutions are now being used by governments and humanitarian organizations to help change the situation. For instance, several years ago, India launched an initiative to provide each citizen with a national identity number. The government has now issued more than 1.2 billion Aadhaar cards (covering more than 80% of the country’s population), which establish a unique 12-digit number for every Indian adult, child, and infant, according to a government report. The Aadhaar, which includes demographic and biometric information, provides a universal identity infrastructure that can be used by any identity-based application, such as banking, mobile, government, and other needed services.
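As a small technical aside, the last digit of an Aadhaar number is a Verhoeff check digit, an error-detecting scheme that catches single-digit typos and adjacent transpositions. The sketch below validates such a digit using the standard dihedral-group tables; the sample digits are the textbook Verhoeff example, not a real Aadhaar number.

```python
# Verhoeff check-digit validation, the scheme used for the final digit of
# Aadhaar numbers. D is the dihedral group D5 multiplication table; P holds
# the powers of the standard permutation; INV maps each value to its inverse.

D = [
    [0,1,2,3,4,5,6,7,8,9], [1,2,3,4,0,6,7,8,9,5], [2,3,4,0,1,7,8,9,5,6],
    [3,4,0,1,2,8,9,5,6,7], [4,0,1,2,3,9,5,6,7,8], [5,9,8,7,6,0,4,3,2,1],
    [6,5,9,8,7,1,0,4,3,2], [7,6,5,9,8,2,1,0,4,3], [8,7,6,5,9,3,2,1,0,4],
    [9,8,7,6,5,4,3,2,1,0],
]
P = [[0,1,2,3,4,5,6,7,8,9],      # p^0 is the identity permutation
     [1,5,7,6,2,8,3,0,9,4]]      # the base permutation p
for i in range(2, 8):            # p^i[j] = p^(i-1)[p[j]]
    P.append([P[i-1][P[1][j]] for j in range(10)])
INV = [0,4,3,2,1,5,6,7,8,9]      # inverses under the D table

def verhoeff_valid(number: str) -> bool:
    c = 0
    for i, ch in enumerate(reversed(number)):
        c = D[c][P[i % 8][int(ch)]]
    return c == 0

def verhoeff_check_digit(number: str) -> str:
    c = 0
    for i, ch in enumerate(reversed(number), start=1):
        c = D[c][P[i % 8][int(ch)]]
    return str(INV[c])

print(verhoeff_check_digit("236"))   # 3, the textbook worked example
print(verhoeff_valid("2363"))        # True
```

Unlike a simple mod-10 scheme such as Luhn, Verhoeff detects all adjacent-transposition errors, which matters when a 12-digit number is copied by hand at enrollment centers.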

Identification cards are a powerful way of demonstrating to people that there are benefits to being a part of the formal economy. “The efforts for universal identification are incredibly important,” says Carmen Navarro, a former project manager for financial inclusion at the World Economic Forum. “They will help provide the underserved with social benefits and other financial services products that they have never had access to before.”

Electronic Money: Increasing Effectiveness and Accountability

When humanitarian aid must be delivered in conflict zones and fragile states, organizations sometimes struggle to get aid into the hands of those who need it. Corruption and thievery can divert resources away from their intended recipients. Digital cash has become an important tool for thwarting the bad guys, especially in times of disasters or conflict, when aid must move quickly and can become more difficult to follow.

“In some situations, digital cash is the best emergency aid because it can be tracked, so organizations know how much aid is being administered and to whom. This ensures that aid is not being diverted to someone other than the intended recipient or resold on the black market,” says Kate Van Waes, policy director, agriculture and inclusive growth, at the ONE Campaign, a global antipoverty organization.

Digital cash also helps protect those delivering aid. “A digital form of payment reduces the use of cash, making transactions more transparent and safer,” says Navarro. She sees this in Latin America, where governments are encouraging the transfer of social benefits to digital form. “This limits the amount of cash that must be transported to very remote locations, which can be costly and dangerous,” she says.

Social Media: Improving Response Time and Accuracy

Social media has become a vital part of communicating aid needs after disasters, says NetHope’s Olafsson. “Social media allows people in existing social networks—whether it’s a community, a neighborhood, or a school—to amplify their connections,” he says. “This is critical in times of disasters, because now people can help each other, whether it’s preparing for an event or helping after.”

Private sector and humanitarian organizations are now using Facebook pages to connect with people who have been affected by disasters, such as flooding and earthquakes. Government and community pages publish early warning notices of impending disasters, as well as updates on recovery efforts. For instance, within hours of a 2015 earthquake in Nepal, Mark Zuckerberg, Facebook’s CEO, posted on his Facebook page that his company’s Safety Check service was active, helping people in the region inform family and friends that they were safe.

As technology evolves, social media promises to have an even bigger impact on disaster relief and could radically change the dynamic of humanitarian response. People will be able to communicate their needs to aid organizations in the moment. “In the future, there will be a big shift in response efforts that is fueled by mobile phones, social networks, and other real-time communication,” Olafsson says.

“There will be a 180-degree turn, a shift from a government top-down approach to responding to disasters to a community bottom-up approach.”

IoT: Wiring Up to Predict Disasters

Over 100 million people were affected by disasters, such as floods, earthquakes, storms, heat waves, and drought, in 2014. Yet according to the One Humanity: Shared Responsibility report, only 0.4% of official development assistance was spent on disaster preparedness in 2014.

Today, Internet of Things (IoT) solutions are helping communities around the world get early warnings of impending disasters. One example is Buenos Aires, where flash floods over the past several years had taken lives and left people stranded without vital services. Today, the city is using planning, design, sensor, and analytics technologies to prevent flooding and provide better response.

With IoT sensors throughout the city’s water tunnels, Buenos Aires can now better anticipate and identify where the flooding risk is and better prepare for or fix the issue. The government can also use social media to engage with citizens and provide warnings, preparation advice, and instructions on what to do in case of emergency.
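The pattern Buenos Aires applies, continuous sensor readings checked against risk thresholds to trigger tiered warnings, can be sketched in a few lines. The sensor names and threshold values below are illustrative assumptions, not details of the city's actual system:

```python
# Minimal sketch of an IoT early-warning loop: compare tunnel water-level
# readings against thresholds and flag the locations that need attention.
# All sensor IDs and threshold values are hypothetical.

WARNING_CM = 120   # water level that triggers a public advisory
CRITICAL_CM = 180  # water level that triggers an emergency response

def classify(level_cm: float) -> str:
    """Map a water-level reading to an alert tier."""
    if level_cm >= CRITICAL_CM:
        return "critical"
    if level_cm >= WARNING_CM:
        return "warning"
    return "normal"

def check_sensors(readings: dict) -> dict:
    """Return only the sensors that require an alert."""
    return {
        sensor: tier
        for sensor, level in readings.items()
        if (tier := classify(level)) != "normal"
    }

alerts = check_sensors({"tunnel-a": 95.0, "tunnel-b": 130.5, "tunnel-c": 200.0})
print(alerts)  # {'tunnel-b': 'warning', 'tunnel-c': 'critical'}
```

A real deployment would feed this loop from live telemetry and route the alert tiers to the social media and emergency channels the article describes.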

Data Transparency: Ensuring the Right Delivery of Funds

Data ensures greater transparency in the relationship between citizens and governments, which increases accountability and helps provide better aid for the unserved and underserved.

For instance, the International Budget Partnership’s Open Budget Survey 2015 reports that 98 out of 102 countries lack adequate systems for ensuring that public funds intended to support communities with basic needs such as education are used efficiently and effectively. The ONE Campaign has an initiative called Follow the Money, which creates greater accountability for government funds and less diversion from intended purposes. “All too often, money from Africa’s natural oil, gas, and mining resources ends up being wasted or, worse, stolen and used to buy luxury property in London, New York, or Paris rather than benefiting the poorest people,” says David McNair, director of transparency and accountability at the ONE Campaign.

Transparency around data is making it more difficult for aid money to disappear. For example, in a rural community in Nigeria, over 400 students were crammed into two classrooms when funds to build a new school ran out during construction, according to the ONE Campaign. The community requested aid from the Nigerian government to complete the work but never heard back. Unbeknownst to the citizens, the government had approved the request but hadn’t released the allocated funds—that is, until BudgIT, a network of citizen activists, got involved. Using the organization’s Tracka technology, which tracks and publicizes capital projects in Nigeria, the community gained visibility into the government’s budget, the funds were freed up, and the building project was completed in 2015.

Sharing data between the public and private sectors can also speed up success. “For the delivery of basic services to truly be accelerated, collaboration between the public and the private sector is critical,” Navarro says. “Financial institutions, consumer goods companies, and telecommunication providers are a few of the key players here, as they essentially have very strong networks within the segments of the population that humanitarian aid efforts are targeting.”

Data Analytics: Knowing Who Needs Help and When

Aid organizations are trying to move beyond responding to humanitarian needs and begin predicting those needs. Real-time, broader-based data sets are already allowing organizations to better analyze current situations, measure progress to date, and gain keener insight into future trends. “The organizations that are instrumental in relief efforts often make blind decisions because they are using out-of-date information that is not representative of what is actually happening today,” says Olafsson.

Big Data analysis lets humanitarian organizations bring aid down to the personal level, says Navarro. “Behavioral data can feed the design of the right financial services products for the underserved and help ensure they are properly used,” she says. “Information from a simple savings or deposit account could help inform micro-insurance products and offers of credit.”

The Cloud: Creating a Humanitarian Ecosystem

With so many different humanitarian organizations operating independently around the world, there is often duplication of effort when delivering aid. This could change with the creation of a humanitarian ecosystem in which aid organizations share technology solutions and data through the cloud.

Humanitarian organizations all share the same challenges. They must do more with less money, they must better understand whom they are serving, and they must implement technology solutions that are interoperable with those of their counterparts. By pooling resources in a shared cloud, more funds could go to aid rather than to infrastructure costs.

“Once you open up technology and connectivity,” Olafsson of NetHope says, “you start enabling different ways of thinking. If technology can provide even just a 1% improvement in organizational efficiency, and if the duplication of efforts was eliminated, there could be large amounts of savings.”

Restoring Dignity Faster

One of the biggest concerns for people who need humanitarian aid is that basic services that most of us take for granted are poor or don’t exist. Improved aid delivery helps change that.

Aid and government funds give people the opportunity to look beyond survival and start down the path toward prosperity. “A big part of prosperity is providing the space for people to have a voice and express an opinion over things that affect their lives,” says the ONE Campaign’s McNair. “It’s not just about access to more money. It’s about removing the constraints to living a fulfilled life, whether it’s freedom of speech, access to money, or an opportunity to learn a new trade.”

Read more thought-provoking articles in the latest issue of the Digitalist Magazine, Executive Quarterly.


Digitalist Magazine


Intel unveils next-generation Xeon Phi chips for A.I.

August 19, 2016   Big Data

Silicon Valley is full of chatter about artificial intelligence, deep learning neural networks, and machine learning. And Intel, the world’s biggest chip maker, is becoming a lot more conversant in that chatter today.

Intel executive Diane Bryant announced today that the company is working on a next-generation version of its high-end server chip, the Xeon Phi, for A.I. applications.

Baidu will use the upcoming Xeon Phi chips in the data centers it is building for its Deep Speech platform, where its networks will be able to parse natural language speech as quickly and accurately as possible.

By 2020, there will be more servers handling data analytics than any other workload, Bryant said.

Intel’s chips have been speedy number crunchers for the longest time. But in recent years, Nvidia’s graphics chips have become a lot more useful in servers dedicated to neural networks, which can process unstructured data such as video or speech and recognize patterns more easily.

To respond, Intel has started focusing more resources on central processing units (CPUs) that can handle more deep learning tasks. And Intel is betting that an improved CPU, and Xeon Phi in particular, is the answer. The new chips, code-named Knights Mill, will arrive in 2017.


Above: Diane Bryant of Intel with Jing Wang of Baidu.

Intel also acquired Nervana, a San Diego, California-based deep learning startup, for more than $350 million last week. That team will help Intel on multiple levels with deep-learning cloud applications and a development framework. Jason Waxman, corporate vice president for cloud computing at Intel, said in an interview with VentureBeat that the Nervana team will be broadly useful for Intel’s A.I. efforts.

Intel argues that its Xeon Phi chips will run at “comparable levels of performance” to Nvidia’s graphics processing units. Of course, Nvidia begs to differ, and it said so in a blog post yesterday.

Bryant said that the improvement in performance with CPU-based processing is huge because the processor can access memory much faster, which is important as the size of the task scales upward.

Intel is also partnering with the National Energy Research Scientific Computing Center to optimize machine learning at huge scales.

Jing Wang, senior vice president at Baidu, said, “The next era is the era of artificial intelligence. It is technology that changes people’s lives.” Baidu, he added, is “very excited about” using A.I. for speech and natural language processing.


Above: The new Intel Xeon Phi

Image Credit: Intel





Big Data – VentureBeat


Take Your Integration Forward with TIBCO’s Next-Generation Integration Platform

May 23, 2016   TIBCO Spotfire

I have read a lot of positive feedback from industry visionaries about TIBCO NOW 2016, including Sandy Kemsley and Ghislain Côté. While those posts cover the incredible announcements TIBCO made at the event, what really exemplified the new TIBCO was the amazing sessions given by customers and partners, sessions that make you take a step back and think about how the integration landscape is changing and what you must consider. Failing to consider these points, and choosing a platform that does not address them, may be a big step backward, jeopardizing your future success.

How does containerization change your approach to integration?

Listening to various breakout sessions from Allan Nima, Product Manager of Google Cloud Platform and David Potes, Partner Solutions Architect at Amazon Web Services, it is clear to me that containers are going to be a critical piece of any organization’s plans going forward, if they are not already. How will your integration platform fit into this emerging architectural approach?

What implications will 12-factor, cloud-native application design have on your integration platforms?

Multi-tenant, twelve-factor app design approaches have introduced new requirements for building your applications. Simply leveraging an integration service in the cloud does not mean the applications you build with your integration platform will follow these methodologies. You need to examine your integration technology to see whether you need to approach your application design differently, or even whether your integration technology supports these approaches at all.
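One concrete twelve-factor requirement is storing configuration in the environment rather than in code, so the same build artifact runs unchanged across deployments. A minimal sketch of the idea; the variable names and defaults are illustrative assumptions, not any specific platform's settings:

```python
# Twelve-factor principle III ("Config"): read configuration from
# environment variables so the same artifact deploys to dev, staging,
# and production unchanged. Names and defaults below are hypothetical.
import os

class IntegrationConfig:
    def __init__(self) -> None:
        self.broker_url = os.environ.get("BROKER_URL", "amqp://localhost:5672")
        self.api_base = os.environ.get("API_BASE", "https://api.example.com")
        self.max_retries = int(os.environ.get("MAX_RETRIES", "3"))

os.environ["MAX_RETRIES"] = "5"   # e.g. injected by the container platform
cfg = IntegrationConfig()
print(cfg.max_retries)  # 5
```

An integration platform that bakes endpoints or credentials into deployable artifacts fails this test, which is exactly the kind of gap the question above asks you to look for.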

What is the role of the iPaaS, in this new API-first world?

While cloud service integration is still critical, listening to Pandu Ranga Swamy Prudhivi from AppS Associates talk about their API-driven, contract-first approach to shortening time-to-market for their clients’ projects, it is clear that this new API economy will change how users approach solving connectivity problems. iPaaS platforms will be required to evolve to support this shift in customers’ API focus.

How do you deal with real-time, event-driven mobile integration requirements?

In sessions and interviews with both Amit Sachdeva and Mike Thompson of Kony, a leader in the enterprise mobility space, a number of integration patterns were identified where the request/reply nature of HTTP is not suitable in a mobile environment (e.g., full-duplex bidirectional communication, high volume, guaranteed-delivery requirements). Will your integration platform be able to leverage asynchronous communication out to mobile devices to support these integration patterns?
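One of the patterns named here, guaranteed delivery over an unreliable mobile link, is typically handled with acknowledgments and retries rather than a single HTTP request/reply. A minimal, transport-agnostic sketch; the send function and retry policy are illustrative assumptions:

```python
# Sketch of at-least-once delivery: keep retrying a message until the
# receiver acknowledges it or attempts run out. Transport is abstracted.
import time

def deliver(message: str, send, max_attempts: int = 5, backoff_s: float = 0.0) -> bool:
    """Retry `send` until it returns an ack (True) or attempts are exhausted."""
    for attempt in range(1, max_attempts + 1):
        if send(message):                    # send() returns True on ack
            return True
        time.sleep(backoff_s * attempt)      # linear backoff between retries
    return False

# Simulate a flaky link that acknowledges only on the third attempt.
attempts = []
def flaky_send(msg: str) -> bool:
    attempts.append(msg)
    return len(attempts) >= 3

assert deliver("sync-update", flaky_send)    # succeeds after retries
print(len(attempts))  # 3
```

Real messaging layers add deduplication on the receiver side, since at-least-once delivery means a message may arrive more than once.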

Dealing with challenges of the Internet of Things

Integrating these millions, or even billions, of devices is not simply a matter of providing connectivity. Connectivity is certainly important, but because of the data volumes and the constraints on network connectivity and bandwidth, it makes sense to move some of the integration work out to the devices themselves. There, logic can be applied and data consolidated, so that less frequent, more relevant communications take place, eliminating many of the bottlenecks facing IoT projects today.
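The device-side consolidation described above can be sketched as a simple edge aggregator that buffers raw readings and transmits one compact summary per window instead of every sample. The window size and summary fields are illustrative assumptions:

```python
# Sketch of edge consolidation: buffer raw sensor readings on the device
# and emit one summary message per window instead of every raw sample.

class EdgeAggregator:
    def __init__(self, window: int = 10):
        self.window = window
        self.buffer = []

    def add(self, reading: float):
        """Buffer a reading; return a summary once the window fills."""
        self.buffer.append(reading)
        if len(self.buffer) < self.window:
            return None  # nothing to transmit yet
        summary = {
            "count": len(self.buffer),
            "min": min(self.buffer),
            "max": max(self.buffer),
            "mean": sum(self.buffer) / len(self.buffer),
        }
        self.buffer.clear()
        return summary  # one message instead of `window` messages

agg = EdgeAggregator(window=5)
out = [agg.add(x) for x in [1.0, 2.0, 3.0, 4.0, 5.0]]
print(out[-1])  # {'count': 5, 'min': 1.0, 'max': 5.0, 'mean': 3.0}
```

With a window of 5, network traffic drops fivefold while the summary preserves the signal the backend actually needs, which is the trade-off the paragraph above describes.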

But you still face the challenge of the ever-increasing volumes of information IoT will generate. To capture the value of this information, will your integration platform offer real-time streaming event processing, or will you need to bring on another tool to address these challenges? And if so, will your integration platform work well with it?

Who will need to perform integration work?

Historically, integration was done within the core IT function, by Integration Centers of Excellence. iPaaS offerings simplified the process of interconnecting cloud-based services, allowing developers within the lines of business, or even in smaller, less sophisticated organizations, to perform integration tasks. That is not enough anymore. These days, success is about empowering every user in the organization to drive change and disrupt the status quo. The integration technology you leverage should support integration performed by people who do not know programming, or for that matter do not know what integration is. It is about empowering the digital citizens, who simply know they need to connect to services or devices and share information.

TIBCO and its customers and partners demonstrated this past week the next generation of integration, one that addresses all of the above and more. TIBCO delivers integration on premises, in containers, or on cloud platforms. We provide the shortest path from the creation of an API to its publication. We integrate legacy systems, microservices, APIs, devices, you name it. TIBCO caters to the center of excellence, the line of business, and digital citizens. We interconnect everything. Don’t let your integration step backward. Discover the next step forward for your integration here.

Additionally, if you are at the Cloud Foundry Summit, May 23–25, 2016, in Santa Clara, California, or the Gartner Application Architecture, Development & Integration Summit, May 23–24, 2016, in London, stop by and see the TIBCO technology firsthand.


The TIBCO Blog


Introducing the Next-Generation Salesforce1 Mobile App

November 27, 2014   Salesforce

How many of you would be lost without your smartphone, both in your personal life and at work?

At Salesforce, we realize every one of our customers is actively defining and budgeting a mobile strategy so they can connect with their customers in whole new ways. According to a 2014 Accenture study, 87% of C-level executives have a formal mobile strategy at the enterprise or unit level, and roughly one-third have the CEO directly involved.

What’s driving this powerful shift to mobile is that our employees and customers instinctively reach for their smartphones first to get work done quickly. It has become our preferred way to work. Only after we realize we can’t get something done quickly through our mobile device do we fall back on more traditional methods of getting things done: making a phone call from our desk, setting up a meeting, or visiting a website from our laptops. We realize that if we can’t enable our employees and customers to work mobile first, our companies risk losing competitiveness in the war for employee talent and high-value customers.

This mobile-first mindset is anchored in our flagship Salesforce1 Mobile App. The Salesforce1 Mobile App is the one app with infinite possibilities, powered by our Salesforce1 Platform. It’s the one app that brings all your customizations forward onto all your mobile devices. Now, more than 85,000 organizations are using the Salesforce1 Mobile App as a key pillar of their mobile strategy.

Today we’re excited to share with you our latest update, version 7.0, to the Salesforce1 Mobile App—available on the Apple App Store today, and soon to be available on Google Play. This latest update to the Salesforce1 Mobile App not only brings forward the 200+ productivity feature enhancements made available in the Winter ’15 release, but also improves the mobile user experience to help our users work faster, using fewer taps, to run their business from their phones. Read on to learn more about five of my favorite enhancements:

1. Improved User Interface

We have retooled our user navigation to reduce the number of taps required to get work done. Filtering list views means you can slice and dice any of your carefully curated list views to find exactly what you want, faster. Once you find what you’re looking for, every page is cached and stenciled to orient users faster. The new, consolidated action bar with row-level actions allows users to complete work in fewer taps. The initial user feedback we’re hearing is that what used to take 20 taps and 50 seconds now takes roughly 10 taps and 25 seconds: 2X faster!

2. Top 10 Requested Desktop Features

We paid careful attention to all of your user comments, with extra attention to the top 10 most highly voted feature requests. We are proud to deliver popular desktop features like lead convert, adding products to opportunities, and sharing dashboards to Chatter, along with several other popular desktop features that users wanted pulled forward onto their mobile devices for easier access. The #1 most popular request was Salesforce Events; now, with version 7.0, you can not only view your events on your mobile devices but also create new events and log events from Today, quickly, from your phone.

3. Today

The Today app just got a lot more powerful. In addition to quickly logging events as activity history with a single tap, we also made Today the most productive destination to start your day with My Recent Records, Account News, and Weather — everything you need to start your day.

4. Windows and Blackberry Mobile Browser Access

Now, you can log in using mobile browsers on Windows 8.1 and Blackberry 10 mobile devices — meaning you can access Salesforce on the go from any device.

5. Salesforce1 Setup

We love all our users, especially our most powerful type of user: our system administrators! In Winter ’15, we added a powerful new Salesforce1 Setup wizard to help system administrators quickly configure their navigation menu, action bar, and compact layouts with just a few easy taps.

These are just five out of more than 200 new features in the Winter ’15 release and our 7th-generation mobile app. Want to learn more? Check out our New Features page or visit the Salesforce Success Community. And don’t forget to download the Salesforce1 Mobile App for iOS today, under your “Updates” tab or by clicking the button below — and stay tuned for the Android release later this month.  

Lastly, let us know how you like the new app. Whether you leave comments on the App Store or on the Success Community, just know that we read all your comments and want your feedback. 

Clarence So is EVP, Salesforce1.



Australia & NZ Blog
