Business Intelligence Info

Tag Archives: Create

Create and Update Contacts (Child) using an Embedded Power Apps Sub-grid

April 6, 2021   Microsoft Dynamics CRM

One of the features we are quite excited about in Customer Engagement is the ability to embed Canvas Power Apps in Dynamics 365. Many see it as a replacement for Dialogs, but it does more than that. Given the popular demand for working with Power Apps, it would be great to be able to create and update records the same way. The scenario we will be looking at is to fetch all the…


PowerObjects- Bringing Focus to Dynamics CRM


Cere Network raises $5 million to create decentralized data cloud platform

March 29, 2021   Big Data





Cere Network has raised $5 million for its decentralized data cloud (DDC) platform, which is launching today for developers. The company’s ambition is to take on data cloud leader Snowflake.

The investment was led by Republic Labs, the investment arm of crowdsourced funding platform Republic. Other investors include Woodstock Fund, JRR Capital, Ledger Prime, G1 Ventures, ZB exchange, and Gate.io exchange. Cere Network previously raised $5 million from Binance Labs and Arrington XRP Capital, amongst others, bringing its total raised to $10 million.

“Enterprises using Snowflake are still constrained by bureaucratic data acquisition processes, complex and insufficient cloud security practices, and poor AI/ML governance,” Cere Network CEO Fred Jin said in an email to VentureBeat. “Cere’s technology allows more data agility and data interoperability across different datasets and partners, which extracts more value from the data faster compared to traditional compartmentalized setup.”

The Cere DDC platform, which launches to developers today, allows thousands of data queries to be hosted on the blockchain, a transparent and secure digital ledger.

The platform offers a more secure first-party data foundation in the cloud by using blockchain identity and data encryption to onboard and segment individual consumer data. This data is then automated into highly customizable and interoperable virtual datasets, directly accessible in near real time by all business units, partners/vendors, and machine-learning processes.

Above: Cere Network’s data cloud query. (Image credit: Cere Network)

The Cere token will be used to power its decentralized data cloud and fuel Cere’s open data marketplace that allows for trustless data-sharing among businesses and external data specialists, as well as staking and governance. The public sale of the Cere token will be held on Republic, the first token sale on the platform.

“We’ve been following Cere Network for some time and have been impressed with the team and the market fit – and need – for a decentralized data cloud,” said Boris Revsin, managing director of Republic Labs, in a statement. “We’re very excited to host Cere Network’s token sale on Republic, which will ensure a decentralized network and faster adoption in the enterprise space of blockchain technology. Their DDC improves upon Snowflake using blockchain identity and data encryption to onboard and segment individual consumer data.”

Developers can access the Cere DDC here. The public sale for the Cere token is scheduled for March 31 on Republic. The company said it is working with a number of Fortune 1000 customers.

“There’s a huge amount of opportunities in this rapidly shifting space for the coming years. We don’t plan to take on the likes of Snowflake head on, yet, but rather focus on specific solutions and verticals where we can bring more customization and efficiency. We are OK with chipping away at their lead while doing this,” Jin said. “We are bringing an open data marketplace which will open up data access beyond the limitation of traditional siloed data ecosystems, which include Snowflake, and the likes of Salesforce.”


Big Data – VentureBeat


How to functionally create N*n lists of ordered pairs using Outer (or similar) where n is dynamic / recursive?

February 22, 2021   BI News and Info

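Only the question title appears in this excerpt, so the intended setup is unclear. As a rough illustration of the general idea — a Cartesian product over a number of factors that is only known at run time, analogous to what Mathematica’s Outer can produce — here is a minimal Python sketch; the ranges and the value of n are placeholders.

```python
from itertools import product

def ordered_tuples(*factors):
    """Cartesian product over a dynamic number of factors,
    similar in spirit to flattening Mathematica's Outer[List, f1, f2, ...]."""
    return list(product(*factors))

# n is decided at run time, so the number of factors is dynamic.
n = 3
factors = [range(1, 4)] * n           # three copies of 1..3
print(ordered_tuples(*factors)[:4])   # [(1, 1, 1), (1, 1, 2), (1, 1, 3), (1, 2, 1)]
```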


Recent Questions – Mathematica Stack Exchange


Hello, I want to create a matrix with a certain number of rows and columns

December 6, 2020   BI News and Info

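Again, only the title survives in this excerpt, so the exact requirement is unknown. For the generic task of building a matrix with a given number of rows and columns, a minimal Python/NumPy sketch follows; the fill rules are arbitrary examples.

```python
import numpy as np

rows, cols = 3, 4

zeros  = np.zeros((rows, cols))                               # all entries 0
sevens = np.full((rows, cols), 7)                             # every entry 7
table  = np.fromfunction(lambda i, j: i + j, (rows, cols))    # entry (i, j) = i + j

print(table)
```

In Mathematica the analogous builders would be ConstantArray and Table, but the shape-first pattern is the same.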


Recent Questions – Mathematica Stack Exchange


How to Create Microservices-based Applications for AWS

October 28, 2020   TIBCO Spotfire


Market demands are shifting rapidly, with many disruptive forces in motion. Businesses are reacting in a number of different ways to preserve cash, change the way they operate, and accelerate digital business initiatives to capture new value. Today’s disruptions are planting seeds for broad and permanent changes across all markets, so businesses need to act now to prepare for what’s to come. To combat these forces, a business needs to be agile so that it can rapidly adapt its operations, products, and services to meet new market conditions. In any case, the business that is able to react quickly maintains resiliency and has a foundation for rapid growth and innovation.

A key starting point for increasing business agility is the digital platform, as businesses are operating more with digital services than manual, rigid, paper-based processes. If you aren’t able to rapidly adapt the services and capabilities of your digital platform to stay aligned with the needs of the business, then your underlying application architecture needs to be evolved so that it becomes more agile. One way to build this agility is by evolving to a microservices architecture.

Microservices are very small units of executable code. The industry has long preached the benefits of breaking down large, monolithic applications into smaller units of execution, and technology has evolved in recent years to the point where this strategy now yields high-performing apps. Microservices can be used to break up monoliths into individual, highly cohesive business services that are deployed in containers and serverless environments. Each microservice can therefore be adapted, deployed, and scaled independently of the others, giving the business a high degree of flexibility to adapt the digital platform very quickly.

TIBCO Cloud Integration makes it easy to develop and deploy your business logic as event-driven microservices and functions on AWS. You can use pre-packaged connectors for AWS to connect to a wide variety of Amazon services to create application logic. The entire application architecture is highly efficient and cost-effective, which will accelerate your adoption of AWS technologies.

TIBCO Cloud Integration simplifies the development and deployment of event-driven applications built with microservices and functions to AWS. Once apps are created, you can package your microservices into a Docker image and deploy them to the AWS container management service of your choice, including Amazon EKS, ECS, and Fargate, or to other container management services. They can also be deployed seamlessly to AWS Lambda.
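As a generic, non-TIBCO illustration of the “functions on AWS Lambda” piece, a minimal Python handler for an event-driven microservice might look like the sketch below; the event shape (an order payload) is invented for the example.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler for an event-driven microservice.

    The 'order' payload is a made-up example; a real deployment would
    receive whatever events the upstream integration (API Gateway, SQS,
    TIBCO Cloud Integration flows, etc.) delivers.
    """
    order = event.get("order", {})
    total = sum(item.get("price", 0) * item.get("qty", 0)
                for item in order.get("items", []))

    return {
        "statusCode": 200,
        "body": json.dumps({"orderId": order.get("id"), "total": total}),
    }
```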

TIBCO’s extensive experience in intelligent connectivity, combined with AWS’s highly flexible and scalable cloud platform, makes for a natural partnership. TIBCO is an AWS Advanced Technology Partner, and we partner with AWS in both technology and business development initiatives. We have many solutions that run natively on AWS and that are also available for purchase through the AWS Marketplace, not only for connectivity but also for analytics, machine learning, and data management.


To learn more about how to create microservices-based applications for AWS, watch this webinar hosted by BrightTalk. And to learn more about TIBCO Cloud Integration, watch our demos or sign up for a 30-day free trial.


The TIBCO Blog


Trying to create a list that counts the number of primes for each remainder class

September 27, 2020   BI News and Info

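The excerpt again contains only the title. One plausible reading — counting how many primes below a limit fall into each remainder class modulo n — can be sketched in a few lines of Python with SymPy; the limit and modulus here are arbitrary.

```python
from collections import Counter
from sympy import primerange

def prime_counts_by_residue(limit, modulus):
    """Count the primes below `limit` in each remainder class mod `modulus`."""
    counts = Counter(p % modulus for p in primerange(2, limit))
    return [counts.get(r, 0) for r in range(modulus)]

print(prime_counts_by_residue(100, 4))   # [0, 11, 1, 13]
```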


Recent Questions – Mathematica Stack Exchange


Researchers create dataset to advance U.S. Supreme Court gender bias analysis

September 22, 2020   Big Data


University of Washington language researchers and legal professionals recently created a labeled dataset for detection of interruptions and competitive turn-taking in U.S. Supreme Court oral arguments. They then used the corpus of “turn changes” to train AI models to experiment with ways to automatically classify turn changes as competitive or cooperative as a way to analyze gender bias.

“In-depth studies of gender bias and inequality are critical to the oversight of an institution as influential as the Supreme Court,” reads the paper University of Washington researchers Haley Lepp and Gina-Anne Levow published on preprint repository arXiv one week ago. “We find that as the first person in an exchange, female speakers and attorneys are spoken to more competitively than are male speakers and justices. We also find that female speakers and attorneys speak more cooperatively as the second person in an exchange than do male speakers and justices.”

Attorneys who speak before the Supreme Court are allotted 30 minutes of oral argument and are expected to stop talking when a justice speaks. Linguists have observed men interrupting women routinely in professional environments and other settings.

Turn changes are defined as instances when one person stops speaking and another person starts speaking. Short audio clips of each turn change were annotated as competitive or cooperative by 77 members of the U.S. legal community who identify as an attorney, judge, legal scholar, or law student in their second year or higher. Lepp and Levow’s work focuses on measuring whether the turn change was cooperative or competitive, based on oral argument audio the Supreme Court made available, in part because previous work by Deborah Tannen found that interruptions in speech can be part of regular discourse and that the context of the conversation can be a factor.

The paper devoted to gender bias analysis was published days before the death of Supreme Court Justice Ruth Bader Ginsburg at the age of 87. Ginsburg was the second woman ever appointed to the U.S. Supreme Court. As a litigator for the American Civil Liberties Union (ACLU), Ginsburg successfully argued cases before the Supreme Court that greatly extended women’s rights in the United States. On Wednesday and Thursday, she will be the first woman and the first Jewish person in U.S. history to lie in state at the U.S. Capitol building for members of the public to say goodbye. She was the longest-serving female justice in U.S. history.

Although voting has already begun in some parts of the country and Ginsburg pleaded in her final days to let the winner of the presidential election fill her vacancy, President Trump is expected to nominate a pick to fill her seat Friday or Saturday. Two Republican Senators pledged not to vote until the presidential election is decided, but Senate Majority Leader Mitch McConnell said just hours after her death that the president’s nominee will get a vote.

Details of the turn changes corpus dataset follow a 2017 study that used automation to identify the number of interruptions that occurred from 2004-2015. The study “Justice, Interrupted: The Effect of Gender, Ideology and Seniority at Supreme Court Oral Arguments” by Tonja Jacobi and Dylan Schweers found that women are interrupted three times as often as male Supreme Court justices are. Female Supreme Court justices were interrupted by attorneys as well as other Supreme Court justices, led by Anthony Kennedy, Antonin Scalia, and William Rehnquist. Scalia and Stephen Breyer also interrupted each other a lot.

A producer of the podcast More Perfect noticed people repeatedly interrupting Ginsburg, which led to an episode on the subject. Jacobi spoke on the podcast and said Ginsburg developed tactics to adapt to frequent interruptions, first by asking to ask a question, then pivoting to ask questions more like male justices who interrupt.

The episode also highlighted that Justice Sonia Sotomayor was found to speak as often as men in the Jacobi study, but has still drawn criticism from media commentators at times for being aggressive. Gender is pervasive in coverage of Supreme Courts, according to a 2016 analysis of media coverage in five democratic countries. The analysis found that generally women who ask questions like male justices are labeled abrasive, militant, or mean by critics.

Last year, the U.S. Supreme Court introduced a rule that justices will try to give attorneys two minutes to speak without interruption at the start of oral arguments.


Big Data – VentureBeat


Intel researchers create AI system that rates similarity of 2 pieces of code

July 29, 2020   Big Data


In partnership with researchers at MIT and the Georgia Institute of Technology, Intel scientists say they’ve developed an automated engine — Machine Inferred Code Similarity (MISIM) — that can determine when two pieces of code perform similar tasks, even when they use different structures and algorithms. MISIM ostensibly outperforms current state-of-the-art systems by up to 40 times, showing promise for applications from code recommendation to automated bug fixing.

With the rise of heterogeneous computing — i.e., systems that use more than one kind of processor — software platforms are becoming increasingly complex. Machine programming (a term coined by Intel Labs and MIT) aims to tackle this with automated, AI-driven tools. A key technology is code similarity, or systems that attempt to determine whether two code snippets show similar characteristics or achieve similar goals. Yet building accurate code similarity systems is a relatively unsolved problem.

MISIM works because of its novel context-aware semantic structure (CASS), which susses out the purpose of a given bit of source code using AI and machine learning algorithms. Once the structure of the code is integrated with CASS, algorithms assign similarity scores based on the jobs the code is designed to perform. If two pieces of code look different but perform the same function, the models rate them as similar — and vice versa.
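Intel has not published CASS’s internals in this article, but the broader notion of structural (rather than textual) code similarity can be illustrated with a toy Python sketch that compares two snippets by the kinds of AST nodes they contain. This is only a stand-in for the idea, not Intel’s method, and it is far cruder than MISIM.

```python
import ast
from collections import Counter

def node_profile(source):
    """Bag of AST node types for a Python snippet: a crude structural summary."""
    return Counter(type(node).__name__ for node in ast.walk(ast.parse(source)))

def structural_similarity(src_a, src_b):
    """Jaccard-style overlap of the two structural profiles, in [0, 1]."""
    a, b = node_profile(src_a), node_profile(src_b)
    return sum((a & b).values()) / sum((a | b).values())

loop_sum    = "def f(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s"
builtin_sum = "def g(xs):\n    return sum(xs)"
print(structural_similarity(loop_sum, builtin_sum))   # same task, modest structural overlap
```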

CASS can be configured to a specific context, enabling it to capture information that describes the code at a higher level. And it can rate code without using a compiler, a program that translates human-readable source code into computer-executable machine code. This confers the usability advantage of allowing developers to execute on incomplete snippets of code, according to Intel.

Intel says it’s expanding MISIM’s feature set and moving it from the research to the demonstration phase, with the goal of creating a code recommendation engine to assist internal and external researchers programming across its architectures. The proposed system would be able to recognize the intent behind an algorithm and offer candidate codes that are semantically similar but with improved performance.

That could save employers a few headaches — not to mention helping developers themselves. According to a study published by the University of Cambridge’s Judge Business School, programmers spend 50.1% of their work time not programming and half of their programming time debugging. And the total estimated cost of debugging is $312 billion per year. AI-powered code suggestion and review tools like MISIM promise to cut development costs substantially while enabling coders to focus on more creative, less repetitive tasks.

“If we’re successful with machine programming, one of the end goals is to enable the global population to be able to create software,” Justin Gottschlich, Intel Labs principal scientist and director of machine programming research, told VentureBeat in a previous interview. “One of the key things you want to do is enable people to simply specify the intention of what they’re trying to express or trying to construct. Once the intention is understood, with machine programming, the machine will handle the creation of the software — the actual programming.”


Big Data – VentureBeat


RetrieveGAN AI tool combines scene fragments to create new images

July 22, 2020   Big Data


Researchers at Google, the University of California, Merced, and Yonsei University developed an AI system — RetrieveGAN — that takes scene descriptions and learns to select compatible patches from other images to create entirely new images. They claim it could be beneficial for certain kinds of media and image editing, particularly in domains where artists combine two or more images to capture the most appealing elements of each.

AI and machine learning hold incredible promise for image editing, if emerging research is any indication. Engineers at Nvidia recently demoed a system — GauGAN — that creates convincingly lifelike landscape photos from whole cloth. Microsoft scientists proposed a framework capable of producing images and storyboards from natural language captions. And last June, the MIT-IBM Watson AI Lab launched a tool — GAN Paint Studio — that lets users upload images and edit the appearance of pictured buildings, flora, and fixtures.

By contrast, RetrieveGAN captures the relationships among objects in existing images and leverages this to create synthetic (but convincing) scenescapes. Given a scene graph description — a description of objects in a scene and their relationships — it encodes the graph in a computationally-friendly way, looks for aesthetically similar patches from other images, and grafts one or more of the patches onto the original image.
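The retrieval step is conceptually a nearest-neighbour search in an embedding space. As a rough, non-authoritative sketch (not the paper’s differentiable retrieval module), cosine-similarity retrieval over patch embeddings in NumPy could look like this; the embedding dimension and the random data are placeholders.

```python
import numpy as np

def retrieve_patches(query_emb, patch_embs, k=3):
    """Indices of the k patch embeddings most similar (cosine) to the query.
    The embeddings are assumed to come from some upstream encoder."""
    q = query_emb / np.linalg.norm(query_emb)
    p = patch_embs / np.linalg.norm(patch_embs, axis=1, keepdims=True)
    return np.argsort(-(p @ q))[:k]

rng = np.random.default_rng(0)
query = rng.normal(size=64)             # embedding of the scene-graph query
bank  = rng.normal(size=(1000, 64))     # embeddings of candidate patches
print(retrieve_patches(query, bank))
```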


The researchers trained and evaluated RetrieveGAN on images from the open source COCO-Stuff and Visual Genome data sets. In experiments, they found that it was “significantly” better at isolating and extracting objects from scenes on at least one benchmark compared with several baseline systems. In a subsequent user study where volunteers were given two sets of patches selected by RetrieveGAN and other models and asked the question “Which set of patches are more mutually compatible and more likely to coexist in the same image?,” the researchers report that RetrieveGAN’s patches came out on top the majority of the time.

“In this work, we present a differentiable retrieval module to aid the image synthesis from the scene description. Through the iterative process, the retrieval module selects mutually compatible patches as reference for the generation. Moreover, the differentiable property enables the module to learn a better embedding function jointly with the image generation process,” the researchers wrote. “The proposed approach points out a new research direction in the content creation field. As the retrieval module is differentiable, it can be trained with the generation or manipulation models to learn to select real reference patches that improves the quality.”

Although the researchers don’t mention it, there’s a real possibility their tool could be used to create deepfakes, or synthetic media in which a person in an existing image is replaced with someone else’s likeness. Fortunately, a number of companies have published corpora in the hopes the research community will pioneer detection methods. Facebook — along with Amazon Web Services (AWS), the Partnership on AI, and academics from a number of universities — is spearheading the Deepfake Detection Challenge. In September 2019, Google released a collection of visual deepfakes as part of the FaceForensics benchmark, which was cocreated by the Technical University of Munich and the University Federico II of Naples. More recently, researchers from SenseTime partnered with Nanyang Technological University in Singapore to design DeeperForensics-1.0, a data set for face forgery detection that they claim is the largest of its kind.


Big Data – VentureBeat


Researchers detail texture-swapping AI that could be used to create deepfakes

July 8, 2020   Big Data

In a preprint paper published on arXiv.org, researchers at the University of California, Berkeley and Adobe Research describe the Swapping Autoencoder, a machine learning model designed specifically for image manipulation. They claim it can modify any image in a variety of ways, including texture swapping, while remaining “substantially” more efficient compared with previous generative models.

The researchers acknowledge that their work could be used to create deepfakes, or synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. In a human perceptual study, subjects were fooled 31% of the time by images created using the Swapping Autoencoder. But they also say that proposed detectors can successfully spot images manipulated by the tool at least 73.9% of the time, suggesting the Swapping Autoencoder is no more harmful than other AI-powered image manipulation tools.

“We show that our method based on an auto-encoder model has a number of advantages over prior work, in that it can accurately embed high-resolution images in real-time, into an embedding space that disentangles texture from structure, and generates realistic output images … Each code in the representation can be independently modified such that the resulting image both looks realistic and reflects the unmodified codes,” the coauthors of the study wrote.

The researchers’ approach isn’t novel in the sense that many AI models can edit portions of images to create new images. For example, the MIT-IBM Watson AI Lab released a tool that lets users upload photographs and customize the appearance of pictured buildings, flora, and fixtures, and Nvidia’s GauGAN can create lifelike landscape images that never existed. But these models tend to be challenging to design and computationally intensive to run.



By contrast, the Swapping Autoencoder is lightweight, using image swapping as a “pretext” task for learning an embedding space useful for image manipulation. It encodes a given image into two separate latent codes — a “structure” code and a “texture” code. During training, the structure code learns to correspond to the layout or structure of a scene, while the texture code captures properties of the scene’s overall appearance.
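The paper’s actual architecture (and the training objectives behind it) is more involved, but the two-code idea can be sketched minimally in PyTorch as below; every layer size is a placeholder, and no training loop is shown.

```python
import torch
import torch.nn as nn

class SwappingAutoencoderSketch(nn.Module):
    """Toy sketch of the structure/texture split: one encoder keeps a spatial
    'structure' code, another pools to a global 'texture' vector, and the
    decoder accepts any (structure, texture) pair, so codes from two images
    can be swapped."""

    def __init__(self, ch=32, tex_dim=64):
        super().__init__()
        self.structure_enc = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 4, stride=2, padding=1),
        )
        self.texture_enc = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(ch, tex_dim),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch + tex_dim, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1),
        )

    def decode(self, structure, texture):
        # Broadcast the global texture vector over the structure code's spatial grid.
        tex = texture[:, :, None, None].expand(-1, -1, *structure.shape[2:])
        return self.decoder(torch.cat([structure, tex], dim=1))

    def swap(self, img_a, img_b):
        """Render image A's layout with image B's appearance."""
        return self.decode(self.structure_enc(img_a), self.texture_enc(img_b))

model = SwappingAutoencoderSketch()
a, b = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
print(model.swap(a, b).shape)   # torch.Size([1, 3, 64, 64])
```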

In an experiment, the researchers trained Swapping Autoencoder on a data set containing images of churches, animal faces, bedrooms, people, mountain ranges, and waterfalls and built a web app that offers fine-grained control over uploaded photos. The app supports global style editing and region editing as well as cloning, with a brush tool that replaces the structure code from another part of the image.

“Tools for creative expression are an important part of human culture … Learning-based content creation tools such as our method can be used to democratize content creation, allowing novice users to synthesize compelling images,” the coauthors wrote.


Big Data – VentureBeat
