Tag Archives: Computing

Intel inks agreement with Sandia National Laboratories to explore neuromorphic computing

October 2, 2020   Big Data

As a part of the U.S. Department of Energy’s Advanced Scientific Computing Research program, Intel today inked a three-year agreement with Sandia National Laboratories to explore the value of neuromorphic computing for scaled-up AI problems. Sandia will kick off its work using the 50-million-neuron Loihi-based system recently delivered to its facility in Albuquerque, New Mexico. As the collaboration progresses, Intel says the labs will receive systems built on the company’s next-generation neuromorphic architecture.

Along with Intel, researchers at IBM, HP, MIT, Purdue, and Stanford hope to leverage neuromorphic computing — circuits that mimic the nervous system’s biology — to develop supercomputers 1,000 times more powerful than any today. Chips like Loihi excel at constraint satisfaction problems, which require evaluating a large number of potential solutions to identify the one or few that satisfy specific constraints. They’ve also been shown to rapidly identify the shortest paths in graphs and perform approximate image searches, as well as mathematically optimizing specific objectives over time in real-world optimization problems.

Intel’s 14-nanometer Loihi chip contains over 2 billion transistors, 130,000 artificial neurons, and 130 million synapses. Uniquely, the chip features a programmable microcode engine for on-die training of asynchronous spiking neural networks (SNNs), or AI models that incorporate time into their operating model such that the components of the model don’t process input data simultaneously. Loihi processes information up to 1,000 times faster and 10,000 times more efficiently than traditional processors, and it can solve certain types of optimization problems with gains in speed and energy efficiency greater than three orders of magnitude, according to Intel. Moreover, Loihi maintains real-time performance results and uses only 30% more power when scaled up 50 times, whereas traditional hardware uses 500% more power to do the same.
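
To make the idea of "incorporating time into the operating model" concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the basic building block many spiking neural networks use. It is plain Python/NumPy with illustrative constants, not Intel's Loihi toolchain: membrane potential accumulates weighted input spikes step by step, leaks toward rest, and fires when it crosses a threshold.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: a sketch of the kind of unit
# spiking neural networks are built from. Constants are illustrative, not Loihi's.
rng = np.random.default_rng(0)

n_steps = 100                                # discrete time steps
tau = 10.0                                   # membrane leak time constant (in steps)
threshold = 1.0                              # spike threshold
weight = 0.4                                 # synaptic weight of the single input
input_spikes = rng.random(n_steps) < 0.3     # random incoming spike train (booleans)

v = 0.0                                      # membrane potential
output_spikes = []
for t in range(n_steps):
    v += -v / tau + weight * input_spikes[t]  # leak plus weighted input
    if v >= threshold:                        # fire and reset
        output_spikes.append(t)
        v = 0.0

print(f"input spikes: {int(input_spikes.sum())}, output spikes: {len(output_spikes)}")
```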

Intel and Sandia hope to apply neuromorphic computing to workloads in scientific computing, counterproliferation, counterterrorism, energy, and national security. Using neuromorphic research systems in-house, Sandia plans to evaluate the scaling of a range of spiking neural network workloads, including physics modeling, graph analytics, and large-scale deep networks. The labs will run tasks on the 50-million-neuron Loihi-based system and evaluate the initial results. This will lay the groundwork for later-phase collaboration expected to include the delivery of Intel’s largest neuromorphic research system to date, which the company claims could exceed 1 billion neurons in computational capacity.

Earlier this year, Intel announced the general readiness of Pohoiki Springs, a powerful self-contained neuromorphic system that’s about the size of five standard servers. The company made the system available to members of the Intel Neuromorphic Research Community via the cloud using Intel’s Nx SDK and community-contributed software components, providing a tool to scale up research and explore ways to accelerate workloads that run slowly on today’s conventional architectures.

Intel claims Pohoiki Springs, which was announced in July 2019, is similar in neural capacity to the brain of a small mammal, with 768 Loihi chips and 100 million neurons spread across 24 Arria10 FPGA Nahuku expansion boards (containing 32 chips each) that operate at under 500 watts. This is ostensibly a step on the path to supporting larger and more sophisticated neuromorphic workloads. Intel recently demonstrated that the chips can be used to “teach” an AI model to distinguish between 10 different scents, control a robotic assistive arm for wheelchairs, and power touch-sensing robotic “skin.”

In somewhat related news, Intel today announced it has entered into an agreement with the U.S. Department of Energy to develop novel semiconductor technologies and manufacturing processes. In collaboration with Argonne National Laboratory, the company will focus on the development and design of next-generation microelectronics technologies such as exascale, neuromorphic, and quantum computing.

D-Wave’s 5,000-qubit quantum computing platform handles 1 million variables

September 29, 2020   Big Data

D-Wave today launched its next-generation quantum computing platform available via its Leap quantum cloud service. The company calls Advantage “the first quantum computer built for business.” In that vein, D-Wave today also debuted Launch, a jump-start program for businesses that want to begin building hybrid quantum applications.

“The Advantage quantum computer is the first quantum computer designed and developed from the ground up to support business applications,” D-Wave CEO Alan Baratz told VentureBeat. “We engineered it to be able to deal with large, complex commercial applications and to be able to support the running of those applications in production environments. There is no other quantum computer anywhere in the world that can solve problems at the scale and complexity that this quantum computer can solve problems. It really is the only one that you can run real business applications on. The other quantum computers are primarily prototypes. You can do experimentation, run small proofs of concept, but none of them can support applications at the scale that we can.”

Quantum computing leverages qubits (unlike bits that can only be in a state of 0 or 1, qubits can also be in a superposition of the two) to perform computations that would be much more difficult, or simply not feasible, for a classical computer. Based in Burnaby, Canada, D-Wave was the first company to sell commercial quantum computers, which are built to use quantum annealing. But D-Wave doesn’t sell quantum computers anymore. Advantage and its over 5,000 qubits (up from 2,000 in the company’s 2000Q system) are only available via the cloud. (That means through Leap or a partner like Amazon Braket.)

5,000+ qubits, 15-way qubit connectivity

If you’re confused by the “over 5,000 qubits” part, you’re not alone. More qubits typically means more potential for building commercial quantum applications. But D-Wave isn’t giving a specific qubit count for Advantage because the exact number varies between systems.

“Essentially, D-Wave is guaranteeing the availability of 5,000 qubits to Leap users using Advantage,” a D-Wave spokesperson told VentureBeat. “The actual specific number of qubits varies from chip to chip in each Advantage system. Some of the chips have significantly more than 5,000 qubits, and others are a bit closer to 5,000. But bottom line — anyone using Leap will have full access to at least 5,000 qubits.”

Advantage also promises 15-way qubit connectivity, thanks to a new chip topology, Pegasus, which D-Wave detailed back in February 2019. (Pegasus’ predecessor, Chimera, offered six connected qubits.) Having each qubit connected to 15 other qubits instead of six translates to 2.5 times more connectivity, which in turn enables the embedding of larger and more complex problems with fewer physical qubits.

“The combination of the number of qubits and the connectivity between those qubits determines how large a problem you can solve natively on the quantum computer,” Baratz said. “With the 2,000-qubit processor, we could natively solve problems within 100- to 200-variable range. With the Advantage quantum computer, having twice as many qubits and twice as much connectivity, we can solve problems more in the 600- to 800-variable range. As we’ve looked at different types of problems, and done some rough calculations, it comes out to generally we can solve problems about 2.6 times as large on the Advantage system as what we could have solved on the 2000-qubit processor. But that should not be mistaken with the size problem you can solve using the hybrid solver backed up by the Advantage quantum computer.”

1 million variables, same problem types

D-Wave today also announced its expanded hybrid solver service will be able to handle problems with up to 1 million variables (up from 10,000 variables). It will be generally available in Leap on October 8. The discrete quadratic model (DQM) solver is supposed to let businesses and developers apply hybrid quantum computing to new problem classes. Instead of accepting problems with only binary variables (0 or 1), the DQM solver uses other variable sets (integers from 1 to 500, colors, etc.), expanding the types of problems that can run on Advantage. D-Wave asserts that Advantage and DQM together will let businesses “run performant, real-time, hybrid quantum applications for the first time.”
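
To illustrate why discrete variables widen the range of expressible problems, here is a small, purely classical sketch (plain Python brute force, not D-Wave's Ocean SDK or hybrid solver) that encodes a toy map-coloring task with discrete variables: each region takes one of three colors, and a penalty accrues whenever two adjacent regions match. A binary-only formulation would need several 0/1 variables per region to express the same constraint.

```python
from itertools import product

# Toy discrete quadratic model: color 4 regions with 3 colors so that adjacent
# regions differ. Variables are discrete (a color index), not binary. Brute
# force stands in for the hybrid solver; this is illustrative only.
regions = ["A", "B", "C", "D"]
colors = [0, 1, 2]
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C")]

def cost(assignment):
    # One unit of penalty for every adjacent pair that shares a color.
    return sum(1 for u, v in edges if assignment[u] == assignment[v])

best = min(
    (dict(zip(regions, combo)) for combo in product(colors, repeat=len(regions))),
    key=cost,
)
print(best, "conflicts:", cost(best))
```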

Put another way, 1 million variables means tackling large-scale, business-critical problems. “Now, with the Advantage system and the enhancements to the hybrid solver service, we’ll be able to solve problems with up to 1 million variables,” Baratz said. “That means truly able to solve production-scale commercial applications.”

Depending on the technology they are built on, different quantum computers tend to be better at solving different problems. D-Wave has long said its quantum computers are good at solving optimization problems, “and most business problems are optimization problems,” Baratz argues.

Advantage isn’t going to be able to solve different types of problems, compared to its 2000Q predecessor. But coupled with DQM and the sheer number of variables, it may still be significantly more useful to businesses.

“The architecture is the same,” Baratz confirmed. “Both of these quantum computers are annealing quantum computers. And so the class of problems, the types of problems they can solve, are the same. It’s just at a different scale and complexity. The 2000-qubit processor just couldn’t solve these problems at the scale that our customers need to solve them in order for them to impact their business operations.”

D-Wave Launch

In March, D-Wave made its quantum computers available for free to coronavirus researchers and developers. “Through that process what we learned was that while we have really good software, really good tools, really good training, developers and businesses still need help,” Baratz told VentureBeat. “Help understanding what are the best problems that they can benefit from the quantum computer and how to best formulate those problems to get the most out of the quantum computer.”

D-Wave Launch will thus make the company’s application experts and a set of handpicked partner companies available to its customers. Launch aims to help anyone understand how to best leverage D-Wave’s quantum systems to support their business. Fill out a form on D-Wave’s website and you will be triaged to determine who might be best able to offer guidance.

“In order to actually do anything with the quantum processor, you do need to become a Leap customer,” Baratz said. “But you don’t have to first become a Leap customer. We’re perfectly happy to engage with you to help you understand the benefits of the quantum computer and how to use it.”

D-Wave will make available “about 10” of its own employees as part of Launch, plus partners.

Baidu offers quantum computing from the cloud

September 24, 2020   Big Data

Following its developer conference last week, Baidu today detailed Quantum Leaf, a new cloud quantum computing platform designed for programming, simulating, and executing quantum workloads. It’s aimed at providing a programming environment for quantum-infrastructure-as-a-service setups, Baidu says, and it complements the Paddle Quantum development toolkit the company released earlier this year.

Experts believe that quantum computing, which at a high level entails the use of quantum-mechanical phenomena like superposition and entanglement to perform computation, could one day accelerate AI workloads. Moreover, AI continues to play a role in cutting-edge quantum computing research.

Baidu says a key component of Quantum Leaf is QCompute, a Python-based open source development kit with a hybrid programming language and a high-performance simulator. Users can leverage prebuilt objects and modules in the quantum programming environment, passing parameters to build and execute quantum circuits on the simulator or cloud simulators and hardware. Essentially, QCompute provides services for creating and analyzing circuits and calling the backend.
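
For readers unfamiliar with what building and simulating a quantum circuit involves, the sketch below shows the underlying linear algebra for a two-qubit Bell-state circuit (a Hadamard followed by a CNOT) in plain NumPy. It is not the QCompute API, just an illustration of the kind of circuit a kit like this builds and executes.

```python
import numpy as np

# Plain-NumPy statevector simulation of a 2-qubit Bell circuit: H on qubit 0,
# then CNOT with qubit 0 as control. Not the QCompute API; illustration only.
H = (1 / np.sqrt(2)) * np.array([[1, 1], [1, -1]])
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)   # start in |00>
state = np.kron(H, I) @ state                   # Hadamard on qubit 0
state = CNOT @ state                            # entangle the two qubits

probs = np.abs(state) ** 2
for basis, p in zip(["00", "01", "10", "11"], probs):
    print(f"|{basis}>: {p:.2f}")                # ~0.5 each for |00> and |11>
```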

Quantum Leaf dovetails with Quanlse, which Baidu also detailed today. The company describes Quanlse as a “cloud-based quantum pulse computing service” that bridges the gap between software and hardware by providing a service to design and implement pulse sequences as part of quantum tasks. (Pulse sequences are a means of reducing quantum error, which results from decoherence and other quantum noise.) Quanlse works with both superconducting circuits and nuclear magnetic resonance platforms and will extend to new form factors in the future, Baidu says.

The unveiling of Quantum Leaf and Quanlse follows the release of Amazon Braket and Google’s TensorFlow Quantum, a machine learning framework that can construct quantum data sets, prototype hybrid quantum and classic machine learning models, support quantum circuit simulators, and train discriminative and generative quantum models. Facebook’s PyTorch relies on PennyLane, Xanadu’s multi-contributor quantum computing project and a third-party library for quantum machine learning, automatic differentiation, and optimization of hybrid quantum-classical computations. And Microsoft offers several kits and libraries for quantum machine learning applications.

Baidu open-sources Paddle Quantum toolkit for AI quantum computing research

May 27, 2020   Big Data

Baidu today announced Paddle Quantum, an open source machine learning toolkit designed to help data scientists train and develop AI within quantum computing applications. It’s built atop Baidu’s PaddlePaddle deep learning platform, and Baidu claims it’s “more flexible” compared with other quantum computing suites, reducing the complexity of one popular algorithm — quantum approximate optimization algorithm (QAOA) — by a claimed 50%.

Experts believe that quantum computing, which at a high level entails the use of quantum-mechanical phenomena like superposition and entanglement to perform computation, could one day accelerate AI workloads compared with classical computers. Moreover, AI continues to play a role in cutting-edge quantum computing research. Baidu is aiming to target researchers on both sides of the equation with Paddle Quantum — toolkits that include quantum development resources, optimizers, and quantum chemistry libraries.

Paddle Quantum supports three quantum applications — quantum machine learning, quantum chemical simulation, and quantum combinatorial optimization — and developers can use it to build quantum models from scratch or by following step-by-step instructions. It includes resources addressing challenges like combinatorial optimization problems and quantum chemistry simulations, as well as complex variable definitions and matrix multiplications enabling quantum circuit models and general quantum computing. It also features an implementation of QAOA that translates into a quantum neural network by identifying a model through classical simulation or running directly on a quantum computer.

“Since Baidu announced the establishment of [the] Institute for Quantum Computing in March 2018, one of our primary goals [has been] to build bridges between quantum computing and AI,” Baidu said in a statement. “[Paddle Quantum] … can help scientists and developers quickly build and train quantum neural network models and provide advanced quantum computing applications.”

Today Baidu also unveiled the latest version of its PaddlePaddle machine learning framework, which over the past few months has gained 39 new algorithms for a total of 146 and more than 200 pretrained models. Among them are Paddle.js, a deep learning JavaScript library that allows developers to embed AI within web browsers or programs in apps like Baidu App and WeChat; Parakeet, a text-to-speech toolkit with cutting-edge algorithms like Baidu’s latest proposed WaveNet model; Paddle Large Scale Classification Tools (PLSC), which enables image classification model training across graphics cards; and EasyData, a new drag-and-drop data service for data collection, labeling, cleaning, and enhancement.

Over 1.9 million developers now use PaddlePaddle, according to Baidu, and 84,000 enterprises have created more than 230,000 models with the framework since its debut — up from 65,000 enterprises and 169,000 models as of last November. (PaddlePaddle, which was originally developed by Baidu scientists for the purpose of applying AI to products internally, was open-sourced in September 2016.) The company anticipates growth will accelerate in light of the recently relaunched PaddlePaddle hardware ecosystem initiative, which will see manufacturers such as Intel, Nvidia, Arm China, Huawei, MediaTek, Cambricon, Inspur, and Graphcore contribute expertise and promote AI app development.

The unveiling of Paddle Quantum follows the release earlier this year of Google’s TensorFlow Quantum, a machine learning framework that can construct quantum data sets, prototype hybrid quantum and classic machine learning models, support quantum circuit simulators, and train discriminative and generative quantum models. Facebook’s PyTorch has its own project for quantum computing in PennyLane, a library for quantum machine learning, automatic differentiation, and optimization of hybrid quantum-classical computations.

IonQ CEO Peter Chapman on how quantum computing will change the future of AI

May 10, 2020   Big Data

Businesses eager to embrace cutting-edge technology are exploring quantum computing, which depends on qubits to perform computations that would be much more difficult, or simply not feasible, on classical computers. The ultimate goals are quantum advantage, the inflection point when quantum computers begin to solve useful problems, and quantum supremacy, when a quantum computer can solve a problem that classical computers practically cannot. While those are a long way off (if they can even be achieved), the potential is massive. Applications include everything from cryptography and optimization to machine learning and materials science.

As quantum computing startup IonQ has described it, quantum computing is a marathon, not a sprint. We had the pleasure of interviewing IonQ CEO Peter Chapman last month to discuss a variety of topics. Among other questions, we asked Chapman about quantum computing’s future impact on AI and ML.

Strong AI

The conversation quickly turned to Strong AI, or Artificial General Intelligence (AGI), which does not yet exist. Strong AI is the idea that a machine could one day understand or learn any intellectual task that a human being can.

“AI in the Strong AI sense, that I have more of an opinion just because I have more experience in that personally,” Chapman told VentureBeat. “And there was a really interesting paper that just recently came out talking about how to use a quantum computer to infer the meaning of words in NLP. And I do think that those kinds of things for Strong AI look quite promising. It’s actually one of the reasons I joined IonQ. It’s because I think that does have some sort of application.”

In a follow-up email, Chapman expanded on his thoughts. “For decades it was believed that the brain’s computational capacity lay in the neuron as a minimal unit,” he wrote. “Early efforts by many tried to find a solution using artificial neurons linked together in artificial neural networks with very limited success. This approach was fueled by the thought that the brain is an electrical computer, similar to a classical computer.”

“However, since then, I believe we now know, the brain is not an electrical computer, but an electrochemical one,” he added. “Sadly, today’s computers do not have the processing power to be able to simulate the chemical interactions across discrete parts of the neuron, such as the dendrites, the axon, and the synapse. And even with Moore’s law, they won’t next year or even after a million years.”

Chapman then quoted Richard Feynman, who famously said “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.”

“Similarly, it’s likely Strong AI isn’t classical, it’s quantum mechanical as well,” Chapman said.

Machine learning

One of IonQ’s competitors, D-Wave, argues that quantum computing and machine learning are “extremely well matched.” Chapman is still on the fence.

“I haven’t spent enough time to really understand it,” he admitted. “There clearly is a lot of people who think that ML and quantum have an overlap. Certainly, if you think of 85% of all ML produces a decision tree. And the depth of that decision tree could easily be optimized with a quantum computer. Clearly there’s lots of people that think that generation of the decision tree could be optimized with a quantum computer. Honestly, I don’t know if that’s the case or not. I think it’s still a little early for machine learning, but there clearly is so many people that are working on it. It’s hard to imagine it doesn’t have application.”

Again, in an email later, Chapman followed up. “ML has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. Generally, Universal Quantum Computers excel at these kinds of problems.”

Chapman listed three improvements in ML that quantum computing will likely allow:

  • The level of optimization achieved will be much higher with a QC as compared to today’s classical computers.
  • The training time might be substantially reduced because a QC can work on the problem in parallel, where classical computers perform the same calculation serially.
  • The amount of permutations that can be considered will likely be much larger because of the speed improvements that QCs bring.
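
To make the "learning as loss minimization" framing in Chapman's note concrete, here is a purely classical sketch that fits a one-variable linear model by gradient descent on a mean squared error loss. Nothing here is quantum; it simply shows the optimization loop he is referring to, using made-up data.

```python
import numpy as np

# Classical illustration of learning as loss minimization: fit y = w*x + b by
# gradient descent on mean squared error over a tiny synthetic training set.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=50)
y = 3.0 * x - 0.5 + rng.normal(scale=0.1, size=50)   # noisy ground truth

w, b, lr = 0.0, 0.0, 0.1
for step in range(500):
    pred = w * x + b
    err = pred - y
    loss = np.mean(err ** 2)
    w -= lr * np.mean(2 * err * x)   # d(loss)/dw
    b -= lr * np.mean(2 * err)       # d(loss)/db

print(f"learned w={w:.2f}, b={b:.2f}, final loss={loss:.4f}")
```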

AI is not a focus for IonQ

Whether it’s Strong AI or ML, IonQ isn’t particularly interested in either. The company leaves that part to its customers and future partners.

“There’s so much to be done in a quantum,” Chapman said. “From education at one end all the way to the quantum computer itself. I think some of our competitors have taken on lots of the entire problem set. We at IonQ are just focused on producing the world’s best quantum computer for them. We think that’s a large enough task for a little company like us to handle.”

“So, for the moment we’re kind of happy to let everyone else work on different problems,” he added. “We just think, producing the world’s best quantum computer is a large enough task. We just don’t have extra bandwidth or resources to put into working on machine learning algorithms. And luckily, there’s lots of other companies that think that there’s applications there. We’ll partner with them in the sense that we’ll provide the hardware that their algorithms will run on. But we’re not in the ML business per se.”

Folding@home crowdsourced computing project passes 1 million downloads amid coronavirus research

March 31, 2020   Big Data

Folding@home software for donating compute for medical research passed 1 million downloads, director Greg Bowman said in a tweet today. The Folding@home Consortium is made up of 11 laboratories around the world studying the molecular structure of diseases like cancer, ALS, and influenza. Research into COVID-19 started earlier this month. The crowdsourced effort is now powered by hundreds of thousands of Nvidia GPUs and tens of thousands of AMD GPUs.

There are now over 1M devices running @foldingathome ! This includes over 356K @nvidia GPUs, over 79K @AMD GPUs, and over 593K CPUs! Thanks to all our volunteers! We’re planning more blog posts on our #COVID19 work/results this week, please stay tuned.

— Greg Bowman (@drGregBowman) March 30, 2020

According to an Nvidia blog post about the milestone, nearly 400,000 gamers donated GPUs to the effort in recent days.

Last week, Folding@home said it crossed the one-exaflop milestone in compute power, making the project collectively more powerful than any supercomputer ever assembled. By comparison, the Summit supercomputer at Oak Ridge National Laboratory in Tennessee has repeatedly ranked first in the Top500 supercomputing rankings and is able to muster 148 petaflops of compute power.

The compute is being used to simulate “potentially druggable protein targets” and understand how the virus that causes COVID-19 interacts with the ACE2 receptor, according to the Folding@home website.

“While we will rapidly release the simulation datasets for others to use or analyze, we aim to look for alternative conformations and hidden pockets within the most promising drug targets, which can only be seen in simulation and not in static X-ray structures,” organizer John Chodera said in a March 10 blog post to launch a series of protein folding projects into production.

In another recent initiative at the intersection of gaming and medical research, earlier this month the University of Washington introduced a Foldit puzzle challenging players to solve protein folding problems. In work at the intersection of protein folding and AI, Google’s DeepMind released predictions of understudied proteins associated with SARS-CoV-2 generated by the latest version of AlphaFold.

A number of open source projects are underway to accelerate progress toward a cure. Cloud computing providers in China like Alibaba, Baidu, and Tencent, as well as AWS, Google Cloud, and Azure in the U.S. are also lending compute to researchers. The CORD-19 data set is made up of tens of thousands of scholarly works and was made available last week for both medical and NLP researchers by a group including Microsoft Research, the Allen AI Institute, and the White House.

Get 2020 vision about edge computing and 5G

December 20, 2019   Big Data
This article is part of the Technology Insight series, made possible with funding from Intel.

With 2020 predictions looming, there’s sure to be a fresh wave of hype around the edge and 5G. Now’s an ideal time to solidify and update your understanding, and explore how they’ll complement each other. If you’re processing payments, taking online orders, detecting fraud, working in financial services, or exploring machine learning, these two technologies can help keep you competitive in the coming months.

What and why

Edge computing is all about processing information from devices closer to where it’s being created, rather than shuttling it back and forth from the cloud. Together with 5G, computing at the edge paves the way for applications that wouldn’t have been possible before. Think augmented and virtual reality, where ultra-low latency keeps what you see in sync with what you do, or autonomous vehicles that need to make split-second decisions based on huge volumes of data.

Globally, IDC forecasts 150 billion connected devices (including RFID) by 2025, many of which will pump out data in real time. In 2017, real-time data was a mere 15% of all information created, captured, or replicated. In 2025, it’s expected to reach 30%. As a percentage, that might not sound transformational. But because the total volume of data is itself forecast to grow several-fold, it’s an order of magnitude higher in raw capacity (from ~5 to ~50 zettabytes). Edge computing gives you the power to perform intelligent analysis based on that deluge of real-time data almost instantaneously, all while minimizing your bandwidth expenses.

Definition emerging

Yes, it’s nearly 2020. But the definition of “edge” still often varies depending on who you ask. From NIST to IEEE, models are still evolving. It could be the Raspberry Pi in a refrigerated big rig selectively sending sensor information to the cloud or the node processing game data for Google’s streaming Stadia platform. Although miles of proximity separate those two interpretations, they both put compute resources closer to the user.

Thanks to the collaborative, vendor-neutral 2018 State of the Edge report, there now exists a more explicit definition with some industry consensus behind it:

  1. The edge is a location, not a thing.
  2. There are lots of edges, but the edge we care about is the edge of the last-mile network.
  3. This edge has two sides: an infrastructure edge and a device edge.
  4. Compute will exist on both sides, working in coordination with the centralized cloud.

For clarity, the device edge includes end-points like phones, drones, AR headsets, IoT sensors, and connected cars; gateway devices like switches and routers; and on-premise servers. They’re all on the downstream side of your last-mile cellular or cable network. The infrastructure edge exists on the upstream side. That’s where you find your compute resources collocated with network access equipment and regional datacenters.

Above: Edge computing exists along a spectrum, from the device edge to the infrastructure edge.

Image Credit: Intel

In our big rig example, the Raspberry Pi is on the device edge. Rather than chewing up bandwidth by continually transmitting environmental data, it processes locally and only phones home in the event of an emergency. Conversely, hyper-local datacenters streaming a 4K gaming experience at 60 frames per second live on the infrastructure edge. Although the device edge offers a narrow latency advantage, you’re obviously going to find much more powerful hardware further upstream.

Beyond the distributed infrastructure edge lies the core network and, ultimately, the cloud, which is more centralized and scalable. But by the time you get to the cloud, latency is much higher (and far less consistent).

Benefits of low-latency edge computing

It’s easy to point at low latency as the killer application of edge computing, particularly as cloud-based software strains under the limitations of physics. Data cannot move any faster than the speed of light, so requests to servers hundreds or thousands of miles away inevitably take tens or hundreds of milliseconds to fulfill. The difference isn’t perceivable as you scroll through your Twitter feed. But those numbers wouldn’t be acceptable to a surgeon operating remotely or a gamer in virtual reality. Above all else, processing at the edge shaves away latency to keep data relevant.

Above: Today’s Internet is ill-equipped for tomorrow’s applications, which will be bandwidth-hungry and latency-sensitive.

Image Credit: State of the Edge 2020
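
A back-of-the-envelope calculation makes the speed-of-light argument above concrete. The distances and the fiber propagation factor below are rough assumptions, and real latency adds routing, queuing, and processing on top of pure propagation delay.

```python
# Round-trip latency from propagation delay alone. Distances and the ~2/3
# speed-of-light factor for fiber are rough assumptions; real-world latency
# adds routing, queuing, and processing time on top.
C_KM_PER_MS = 299_792.458 / 1000   # speed of light in vacuum, km per millisecond
FIBER_FACTOR = 0.67                # signals in fiber travel at roughly 2/3 of c

scenarios = [("nearby edge site", 50),
             ("regional cloud region", 1_500),
             ("distant cloud region", 4_000)]
for label, km in scenarios:
    one_way_ms = km / (C_KM_PER_MS * FIBER_FACTOR)
    print(f"{label:>22}: ~{2 * one_way_ms:.1f} ms round trip over {km} km")
```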

Edge computing also saves you from shuttling every bit of data back and forth between connected devices and the cloud. If you can determine the value of information close to where it’s created, you can optimize the way it flows. Limiting traffic to just the data that belongs on the cloud cuts down on bandwidth and storage costs, even for applications that aren’t sensitive to latency.

Reliability stands to benefit from edge computing, too. A lot can go wrong between the device edge and centralized cloud. But in rugged environments like offshore platforms, refineries, or solar farms, the device and infrastructure edges can operate semi-autonomously when a connection to the cloud isn’t available.

Distributed architectures can even be a boon to security. Moving less information to the cloud means there’s less information to intercept. And analyzing data at the edge distributes risk geographically. The endpoints themselves aren’t always easy to protect, so firewalling them at the edge helps limit the scope of an attack. Further, keeping data local may be useful for compliance reasons. An edge infrastructure gives you the flexibility to limit access based on geography or copyright limitations.

The edge is pervasive; 5G makes it better

Edge computing is not new. As far back as 2000, content delivery networks were being referred to as edge networks. But it’s universally accepted that as 5G coverage grows, edge computing is going to help address the high-bandwidth, low-latency requirements of modern applications with local, rather than regional, compute resources.

The technology underlying 5G will add speed, reliability, and flexibility to enterprise applications by getting compute resources closer to where data is being created. Information will move between 5G networks efficiently, rather than requiring a trip to the centralized cloud and back. As a result, we’re going to be looking at use cases that previously weren’t possible.

According to the 2020 State of the Edge report, the largest demand for edge computing comes from communication network operators virtualizing their infrastructure and upgrading their networks for 5G. Mobile consumer services running on those networks are going to rely on edge computing to enable streaming game platforms, augmented/virtual reality, and AI.

Smart homes, smart grids, and smart cities all share a proclivity for device edge platforms. As those use cases evolve and become more sophisticated, though, there will be demand for infrastructure edge capabilities, too. 5G’s provisions for ultra-reliable low-latency communications (URLLC) and massive machine-type communication (mMTC) mean the devices and edge can be even closer together, making their short connection more efficient.

Above: Traffic lights are networked and connected to an edge gateway server, where their data can be collected and analyzed. As part of an edge network, they can feed data to map tools and reroute traffic around congestion.

Image Credit: Moor Insights & Strategy

And who could forget about the autonomous automobile, the poster child for edge computing enhanced by 5G? Modern automobiles already utilize compute resources on the device edge for collision avoidance, lane-keeping, and adaptive cruise control. But as assisted and autonomous driving features become more sophisticated, infrastructure edge resources will be required to add intelligence that could only come from the surrounding environment. Good examples: rerouting a trip based on traffic miles ahead, communicating with other autonomous vehicles to accelerate from a stoplight in unison, or making split-second decisions to avoid unsafe situations.

The edge is still young

The empowered edge is one of Gartner’s Top 10 Strategic Technology Trends for 2020. However, several other concepts on its watchlist have roots in edge computing as well. Hyperautomation, which deals with the application of artificial intelligence and machine learning to augment human input, is going to rely on a foundation of low latency and unflagging reliability. The multiexperience is another example, incorporating multisensory and multitouchpoint interfaces dependent on high bandwidth and real-time processing. Of course, autonomous things are all about AI, 5G, and edge computing.

Above: The global annual CAPEX for edge IT and data center facilities is forecast to reach $146 billion in 2028 with a 35 percent CAGR.

Image Credit: State of the Edge 2020

Enabling those novel use cases will require substantial investment. A forecast model by Tolaga Research predicts a cumulative CapEx spend of $700 billion between now and 2028 on edge IT and datacenter infrastructure. As the computing pendulum swings from the centralized cloud to a distributed edge, opportunities abound, particularly in the maturing infrastructure edge. Understanding the impact of edge computing and 5G will allow you to provide seamless customer experiences, test new markets, and act on insights in real time.

AWS Braket lets customers experiment with a range of quantum computing hardware

December 3, 2019   Big Data

Announcements from Amazon’s re:Invent 2019 conference in Las Vegas this week are coming in at a steady clip, most recently several relating to the tech giant’s ongoing quantum computing efforts. This afternoon, Amazon Web Services (AWS) — Amazon’s cloud computing division — detailed three key initiatives as part of its plans to advance quantum computing tech: a fully managed service called Amazon Braket, a new academic partnership with the California Institute of Technology (Caltech), and a Quantum Solutions Lab.

“With quantum engineering starting to make more meaningful progress, customers are asking for ways to experiment with quantum computers and explore the technology’s potential,” said AWS senior vice president Charlie Bell. “We believe that quantum computing will be a cloud-first technology and that the cloud will be the main way customers access the hardware.”

AWS Braket

AWS Braket, which launches today in preview, is a fully managed service that provides customers a development environment to build quantum algorithms, test them on simulated quantum computers, and try them on a range of different quantum hardware architectures. Using Jupyter notebooks and existing AWS services, users can assess both present and future capabilities including quantum annealing from D-Wave, ion trap devices from IonQ, and superconducting chips from Rigetti. More will arrive in the coming months.

Amazon says partners were chosen “for their quantum technologies,” and that both customers (like Boeing) and hardware providers can design quantum algorithms using the Braket developer toolkit. They’re afforded the choice of executing either low-level quantum circuits or fully managed hybrid algorithms, and of selecting between software simulators and quantum hardware.
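
As a rough sketch of that workflow, the snippet below builds a small circuit and samples it on a local simulator using the Braket Python SDK's documented entry points; treat the exact imports and calls as an assumption based on AWS's public examples, and note that targeting managed hardware would additionally require a device ARN, which is omitted here.

```python
from braket.circuits import Circuit
from braket.devices import LocalSimulator

# Build a 3-qubit GHZ circuit and sample it on the local simulator that ships
# with the Braket SDK. The calls follow AWS's public examples; swapping in a
# managed device would require an AwsDevice and its ARN (not shown here).
ghz = Circuit().h(0).cnot(0, 1).cnot(1, 2)

device = LocalSimulator()
result = device.run(ghz, shots=1000).result()
print(result.measurement_counts)   # expect roughly equal counts of '000' and '111'
```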

AWS Center for Quantum Computing

Alongside Braket, Amazon announced the AWS Center for Quantum Computing, a new laboratory to be established at Caltech with the goal of “boosting innovation in science and industry.” According to the company, it will bring together Amazon researchers and engineers with academic institutions in quantum computing to develop more powerful quantum computing hardware and identify novel quantum applications.

“We are delighted to join with our colleagues at AWS and our academic partners to address the fundamental challenges that must be overcome if quantum computing is to reach its full potential,” said provostial chair and professor of chemistry and chemical engineering David Tirrell. “Caltech has made substantial investments in both experimental and theoretical quantum science and technology over the years, and the new Center will provide an extraordinary opportunity to maximize the impact of those investments.”

AWS Quantum Solutions Lab

Lastly, Amazon debuted the Quantum Solutions Lab, which aims to connect AWS customers with quantum computing experts — including those from 1Qbit, Rahko, Rigetti, QC Ware, QSimulate, Xanadu, and Zapata — to identify ways to apply quantum computing inside their organizations. Amazon says that it and its partners will work with customers on experiments and guide them to incorporate quantum solutions into their business, in part through lab programs that combine hands-on educational workshops with brainstorming sessions.

Amazon’s trio of announcements come after Google made available to cloud customers its Bristlecone quantum processor, and after IBM began giving enterprise customers and research institutions remote access to its quantum machine. In November, another competitor — Microsoft — took the wraps off of Azure Quantum, a service that offers select partners access to three prototype quantum computers from IonQ, Honeywell, and QCI.

The quantum computing market could prove to be a lucrative new source of revenue for AWS. According to some analysts, it’ll reach nearly $1 billion by 2025, up from $89.35 million in 2016.

ASU 2018-15 Simplifies the Process for Accounting for Cloud Computing Expenses

October 10, 2019   NetSuite

Posted by Christopher Miller, Distinguished Solution Specialist

New FASB standard offers guidance on accounting for cloud computing license costs and implementations.

I am frequently contacted by customers and our internal team about how to account for NetSuite costs. As cloud computing became more popular, businesses took different approaches to how they accounted for it on their financial statements. A few of the popular ideas were:

  • Treat a cloud computing arrangement as a fixed asset and depreciate it. Capitalize everything.
  • Present the contract as prepaid and amortize it as an intangible asset.
  • Amortize the contract as an expense and capitalize all of the implementation costs.
  • Reference the contract to lease accounting standards and follow those as an analogy for cloud computing.
  • Treat the cloud computing arrangement as internally developed software.

Each of these ideas had some merit. However, as the percentage of businesses using cloud products increased, the FASB and other standard setters realized that the “diversity” in practice was growing too large.

So, in 2015, the FASB issued ASU No. 2015-05, “Customer’s Accounting for Fees Paid in a Cloud Computing Arrangement,” to simplify the process. The goal was to help a business determine what kind of contract their arrangement included.

The key determination was to help a user know whether their arrangement contained a software license. Using the guidance under ASC 985-605, customers are required to determine if their cloud arrangement contains a software license. If the answer is yes, the contract is treated as an internal-use software intangible under ASC 350-40. If not, the arrangement is accounted for as a service contract.

The release of ASU No. 2015-05 clarified several of the key questions presented above. If the cloud computing arrangement included a software license, then the contract was treated as internal-use software. This allowed the purchaser to capitalize the license and related implementation costs. It also clarified that when an arrangement didn’t have a software license embedded, it was a service contract and should be treated as an operating expense. This clarification put an end to treating service contracts as fixed assets, etc.

However, ASU 2015-05 didn’t address how to handle the implementation costs of a service contract. Many of the same questions remained, such as whether implementation costs can be capitalized or must all be expensed.

To clarify the process of handling service contract implementations, FASB issued ASU No. 2018-15, effective for all filers after December 15, 2019.

Key Provision 1 – Apply Internal Use Software Guidance

When entering into a cloud computing contract that is a service contract, entities will now apply the same guidance toward implementation costs that are used for internal use software. Here are the key guidelines to be aware of when applying this new guidance:

  • Project costs to purchase, develop or install the software generally can be capitalized.
  • Activities related to testing, customization, configuration and scripting can be capitalized.
  • Training and data conversion costs are to be expensed as incurred.
  • These costs can be both internal and external, such as payroll, contractors and travel expenses.

Key considerations here are how you define the different terms and the quality of your project timekeeping and management for identifying the type of work performed.

Amortization of Implementation Costs

ASU 2018-15 states that implementation costs should be amortized over the term of the associated cloud computing arrangement service on a straight-line basis. In addition, it states that the usage rate (number of transactions, users, data throughput) should not be used as a basis for amortization.
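
A simple worked example clarifies the straight-line requirement; the figures below are hypothetical and the sketch is plain Python, not accounting software.

```python
# Straight-line amortization of capitalized implementation costs over the term
# of the hosting arrangement. All figures are hypothetical.
capitalized_cost = 120_000   # capitalized implementation costs, in dollars
term_months = 36             # fixed noncancellable term plus reasonably certain renewals

monthly_expense = capitalized_cost / term_months
for month in (1, 2, term_months):   # print a few sample rows of the schedule
    remaining = capitalized_cost * (term_months - month) / term_months
    print(f"month {month:>2}: expense {monthly_expense:,.2f}, "
          f"unamortized balance {remaining:,.2f}")
```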

In section 350-40-35-14 the guidance states additional periods to be included in the amortization period:

  • An entity (customer) shall determine the term of the hosting arrangement that is a service contract as the fixed noncancellable term of the hosting arrangement plus all of the following:
    • Periods covered by an option to extend the hosting arrangement if the entity (customer) is reasonably certain to exercise that option
    • Periods covered by an option to terminate the hosting arrangement if the entity (customer) is reasonably certain not to exercise that option
    • Periods covered by an option to extend (or not to terminate) the hosting arrangement in which exercise of the option is controlled by the vendor
  • When reassessing the amortization term, an entity shall consider the following:
    • Obsolescence
    • Technology
    • Competition
    • Other economic factors
    • Rapid changes that may be occurring in the development of hosting arrangements or hosted software

Presentation

The new ASU makes a few critical changes to the presentation of cloud service contracts. A few highlights include:

  • 350-40-45-2 An entity shall present the capitalized implementation costs described in paragraph 350-40-25-18 in the same line item in the statement of financial position that a prepayment of the fees for the associated hosting arrangement would be presented.
  • 350-40-45-1 An entity shall present the amortization of implementation costs described in paragraph 350-40-35-13 in the same line item in the statement of income as the expense for fees for the associated hosting arrangement.
  • 350-40-45-3 An entity shall classify the cash flows from capitalized implementation costs described in paragraph 350-40-25-18 in the same manner as the cash flows for the fees for the associated hosting arrangement.

The key takeaway for presentation in the new guidance is that no costs associated with a CCA Service Contract should be treated as depreciation or amortization and should all be included in operating income.

Disclosure Requirements

There are a few key disclosures required from the new ASU. Specifically they are:

  • The nature of its arrangements for cloud computing.
  • Any amortization expenses for the period
  • Major classes of implementation costs that have been capitalized
  • Accumulated amortization of the implementation costs

Conclusion

With the release of ASU 2018-15, the way to manage cloud computing contracts has clarity. It is now a three-step process: determine whether the arrangement includes a software license, manage the implementation project and capitalize the correct costs, and present the costs correctly based on the new ASU.

Nvidia will support Arm hardware for high-performance computing

June 17, 2019   Big Data

At the International Supercomputing Conference (ISC) in Frankfurt, Germany this week, Santa Clara-based chipmaker Nvidia announced that it will support processors architected by British semiconductor design company Arm. Nvidia anticipates that the partnership will pave the way for supercomputers capable of “exascale” performance — in other words, capable of completing at least a quintillion floating point computations (“flops”) per second, where a flop equals two 15-digit numbers multiplied together.
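
To put "exascale" in perspective, the quick calculation below shows how long a quintillion floating point operations would take at a few illustrative sustained rates; the laptop figure is a rough assumption for scale only.

```python
# How long would a quintillion (10**18) floating point operations take at a few
# sustained rates? The laptop figure is a rough assumption for scale only.
EXA_OPS = 1e18

rates = [("laptop, ~100 gigaflops", 1e11),
         ("Summit-class, ~148 petaflops", 1.48e17),
         ("exascale machine, 1 exaflops", 1e18)]
for label, flops in rates:
    seconds = EXA_OPS / flops
    print(f"{label:>30}: {seconds:,.1f} seconds for 10**18 operations")
```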

Nvidia says that by 2020 it will contribute its full stack of AI and high-performance computing (HPC) software to the Arm ecosystem, which by Nvidia’s estimation now accelerates over 600 HPC applications and machine learning frameworks. Among other resources and services, it will make available CUDA-X AI and HPC libraries, graphics-accelerated frameworks, software development kits, PGI compilers with OpenACC support, and profilers.

Nvidia founder and CEO Jensen Huang pointed out in a statement that, thanks to this commitment, Nvidia will soon accelerate all major processor architectures: x86, IBM’s Power, and Arm.

“As traditional compute scaling has ended, the world’s supercomputers have become power constrained,” said Huang. “Our support for Arm, which designs the world’s most energy-efficient CPU architecture, is a giant step forward that builds on initiatives Nvidia is driving to provide the HPC industry a more power-efficient future.”

This is hardly Nvidia’s first collaboration with Arm. The former’s AGX platform incorporates Arm-based chips, and its Deep Learning Accelerator (NVDLA) — a modular, scalable architecture based on Nvidia’s Xavier system-on-chip — integrates with Arm’s Project Trillium, a platform that aims to bring deep learning inferencing to a broader set of mobile and internet of things (IoT) devices.

If anything, today’s news highlights Nvidia’s concerted push into an HPC market that’s forecast to be worth $59.65 billion by 2025. To this end, the chipmaker recently worked with InfiniBand and ethernet interconnect supplier Mellanox to optimize processing across supercomputing clusters, and it continues to invest heavily in 3D packaging techniques and interconnect technology (like NVSwitch) that allow for dense scale-up nodes.

“We have been a pioneer in using Nvidia [graphics cards] on large-scale supercomputers for the last decade, including Japan’s most powerful ABCI supercomputer,” said Satoshi Matsuoka, director at Riken, a large scientific research institute in Japan. “At Riken R-CCS [Riken Center for Computational Science], we are currently developing the next-generation, Arm-based exascale Fugaku supercomputer and are thrilled to hear that Nvidia’s GPU acceleration platform will soon be available for Arm-based systems.”

Nvidia has notched a few wins already. Last fall, the TOP500 ranking of supercomputer performance (based on LINPACK score) showed a 48% jump year-over-year in the number of systems using the company’s GPU accelerators, with the total number climbing to 127, or 3 times greater than five years prior. Two of the world’s fastest supercomputers made the list — the U.S. Department of Energy’s Summit at Oak Ridge National Laboratory and Sierra at Lawrence Livermore National Lab — and others featured Nvidia’s DGX-2 Pod, which combines 36 DGX-2 systems and delivers more than 3 petaflops of double-precision performance.

DGX-2 was announced in March 2018 at Nvidia’s GPU Technology Conference in Santa Clara and boasts 300 processors capable of delivering two petaflops of computational power while occupying only 15 racks of datacenter space. It complements HGX-2, a cloud server platform equipped with 16 Tesla V100 graphics processing units that collectively provide half a terabyte of memory and two petaflops of compute power.

DGX SuperPod

Alongside the partnership announcement this morning, Nvidia revealed what it claims is the world’s 22nd-fastest supercomputer: the DGX SuperPod. VP of AI infrastructure Clement Farabet says it’ll accelerate the company’s autonomous vehicle development.

Above: The Nvidia DGX SuperPod.

Image Credit: Nvidia

“AI leadership demands leadership in compute infrastructure,” said Farabet. “Few AI challenges are as demanding as training autonomous vehicles, which requires retraining neural networks tens of thousands of times to meet extreme accuracy needs. There’s no substitute for massive processing capability like that of the SuperPod.”

The SuperPod contains 96 DGX-2H units and 1,536 V100 Tensor Core graphics chips in total, interconnected with Mellanox and Nvidia’s NVSwitch technologies. It’s about 400 times smaller than comparable top-ranked supercomputing systems and it takes as little as three weeks to assemble while delivering 9.4 petaflops of computing performance. In real-world tests, it managed to train the benchmark AI model ResNet-50 in less than two minutes.

Customers can buy the SuperPod in whole or in part from any of Nvidia’s DGX-2 partners.
