Tag Archives: Weekly

Set Targets and Track Dynamics 365 CRM User Performance with Ease on Daily, Weekly or Monthly Basis

December 30, 2020   CRM News and Info

Looking for a complete Dynamics 365 CRM monitoring solution? Get User Adoption Monitor for seamless tracking of Dynamics 365 CRM user actions!

Recently, our user actions monitoring app – User Adoption Monitor, a Preferred App on Microsoft AppSource – released three new features, making it a formidable app for tracking user actions across Dynamics 365 CRM/Power Apps. In our previous blog, you were given in-depth information about one of the newly released User Adoption Monitor features – Data Completeness – which helps you ensure that all Dynamics 365 CRM records have the necessary information required to conduct smooth business transactions. And now, in this blog, we will shed more light on its remaining two new features – Aggregate Tracking & Target Tracking.

So, let’s see how these two new features will help you in tracking Dynamics 365 CRM/Power Apps user actions.

Aggregate Tracking

With this feature, you will be able to track aggregations of the numeric fields of the entity on which a specific user action has been defined. For example, as a Sales Manager you want to track how much in sales your team has made over a specific period. To know this, all you have to do is configure aggregate tracking for the ‘Opportunity-win’ action. Once the tracking is done, you will get the SUM of the Actual Revenue of all the Opportunities won by each of your team members for a defined period of time. Similarly, you can get the aggregate value (SUM or AVG) of Budget Amount, Est. Revenue, Freight Amount, etc.
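
To make the idea concrete, here is a minimal sketch in plain Python of what aggregate tracking computes: the SUM or AVG of a numeric field such as Actual Revenue, grouped by the user who performed the tracked action. This is an illustration only, not the User Adoption Monitor implementation, and the record layout and field names are assumptions made for the example.

```python
from collections import defaultdict

# Assumed shape of won-opportunity records pulled from Dynamics 365 CRM.
won_opportunities = [
    {"owner": "Alice", "actual_revenue": 12000.0},
    {"owner": "Bob",   "actual_revenue": 8500.0},
    {"owner": "Alice", "actual_revenue": 4300.0},
]

def aggregate_by_owner(records, field, mode="SUM"):
    """Return the SUM or AVG of `field` for each owner."""
    totals, counts = defaultdict(float), defaultdict(int)
    for rec in records:
        totals[rec["owner"]] += rec[field]
        counts[rec["owner"]] += 1
    if mode == "SUM":
        return dict(totals)
    return {owner: totals[owner] / counts[owner] for owner in totals}

print(aggregate_by_owner(won_opportunities, "actual_revenue", "SUM"))
# {'Alice': 16300.0, 'Bob': 8500.0}
```

The app surfaces these per-user rollups inside Dynamics 365 CRM itself, so no code is needed to use the feature; the sketch only shows the kind of grouping and summation involved.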

Target Tracking

Using this feature, you will be able to allot sales targets to your team members and keep regular track of them in Dynamics 365 CRM/Power Apps. You can set targets in both count & value. Consider a scenario where you want to appraise the performance of your sales team. With this feature, you can set a target for each of your team members and, based on the tracking results, analyze their performance easily. You can track the performance of your team members using this feature in the following ways:

Target based on Aggregation Value

If you want to track the total sales value generated by your team members, you can set the target as an aggregate value. Once set, you can monitor the performance of your team members by comparing the aggregate value of the target set against the total aggregate value they achieve on a daily, weekly, or monthly basis.

Target based on Count

Similarly, if you want to keep a tab on the count of sales made by your team members, you can set the target as a count. Once set, you can monitor and compare the target set (in count) against the total target achieved (in count) by your team members on a daily, weekly, or monthly basis.
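
Here is a minimal sketch (plain Python, illustrative only and not the app's logic) of what a count-based comparison involves: the tracked actions are bucketed per day, week, or month, and each bucket is compared with the target. The period keys and record layout below are assumptions made for the example.

```python
from collections import Counter
from datetime import date

def period_key(d: date, interval: str) -> str:
    """Bucket a date into a daily, weekly, or monthly period key."""
    if interval == "daily":
        return d.isoformat()
    if interval == "weekly":
        year, week, _ = d.isocalendar()
        return f"{year}-W{week:02d}"
    return f"{d.year}-{d.month:02d}"  # monthly

def count_vs_target(action_dates, target_per_period, interval="weekly"):
    """Compare the number of tracked actions in each period against the target."""
    achieved = Counter(period_key(d, interval) for d in action_dates)
    return {period: {"achieved": n, "target": target_per_period,
                     "met": n >= target_per_period}
            for period, n in achieved.items()}

wins = [date(2020, 12, 1), date(2020, 12, 3), date(2020, 12, 10)]
print(count_vs_target(wins, target_per_period=2, interval="weekly"))
# {'2020-W49': {'achieved': 2, 'target': 2, 'met': True},
#  '2020-W50': {'achieved': 1, 'target': 2, 'met': False}}
```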

Target defined for Fixed Interval

In the above two scenarios, the Targets were defined with the Interval set as ‘Recurring’. In such cases, tracking will be done on a daily, weekly, or monthly basis. Other than that, you can also define the Target for a fixed period of time by setting the Interval as Fixed.

After you set the Interval of the Target Configuration to Fixed, you can define the Start Date and End Date for which you want to track the target.

With this, you can easily monitor and compare the target set and the total target achieved (in either count or aggregate value) by your team members for a given fixed period of time.
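
A similarly minimal sketch (again illustrative only, with assumed field names) covers the fixed-interval case: only actions that fall between the configured Start Date and End Date count toward the target.

```python
from datetime import date

def fixed_interval_progress(action_dates, target, start, end):
    """Count actions within [start, end] and compare against the target."""
    achieved = sum(1 for d in action_dates if start <= d <= end)
    return {"target": target, "achieved": achieved, "met": achieved >= target}

wins = [date(2020, 11, 28), date(2020, 12, 3), date(2021, 1, 2)]
print(fixed_interval_progress(wins, target=2,
                              start=date(2020, 12, 1), end=date(2020, 12, 31)))
# {'target': 2, 'achieved': 1, 'met': False}
```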

A handy app to have at the time of appraisals, isn’t it?

So, wait no more! Satisfy your curiosity by downloading User Adoption Monitor from our website or Microsoft AppSource and explore these amazing features with a free 15-day trial.

For a personal demo or any user actions monitoring queries, feel free to contact us at crm@inogic.com

Until next time – we wish you a Safe and Glorious New Year!

Keep Tracking Us!

CRM Software Blog | Dynamics 365

AI Weekly: NeurIPS 2020 and the hope for change

December 12, 2020   Big Data

On Monday morning, organizers of NeurIPS, the largest annual gathering of AI researchers in the world, gave Best Paper awards to the authors of three pieces of research, including one detailing OpenAI’s GPT-3 language model. The week also started with AI researchers refusing to review Google AI papers until grievances were resolved over the company’s firing of Ethical AI team co-lead Timnit Gebru. Googlers describe the firing as an instance of “unprecedented research censorship,” raising questions of corporate influence. According to one analysis, Google publishes more AI research than any other company or institution.

Tension between corporate interests, human rights, ethics, and power could be seen at workshops throughout the week. On Tuesday at the Muslim in AI workshop, GPT-3’s anti-Muslim bias was explored, as were the ways in which AI and IoT devices are used to control and surveil Muslims in China. The Washington Post reported this week that Huawei is working on AI with a “Uighur alarm” for authorities to track members of the Muslim minority group. Huawei is a platinum sponsor of NeurIPS. When asked about Huawei and how NeurIPS weighs ethical considerations about sponsors, a NeurIPS spokesperson told VentureBeat Friday that a new sponsorship committee is being formed to evaluate sponsor criteria and “determine policies for vetting and accepting sponsors.”

Following a keynote address Wednesday, Microsoft Research Lab director Chris Bishop was asked if a monopoly on infrastructure and machine learning talent enjoyed by Big Tech companies is stifling innovation. In response, he argued that cloud computing allows developers to rent compute resources instead of undertaking the more expensive task of buying hardware that powers machine learning.

On Friday, the Resistance AI workshop highlighted research that urges tech companies to go beyond scale to address societal issues and compares Big Tech research funding to tactics carried out by Big Tobacco. That workshop was organized to bring together an intersectional group of marginalized people from a range of backgrounds to champion AI that gives power back to people and steers clear of oppression.

“We were frustrated with the limitations of ‘AI for good’ and how it could be co-opted as a form of ethics-washing,” organizers said in a statement to VentureBeat. “In some ways, we still have a long way to go: many of us are adjacent to big tech and academia, and we want to do better at engaging those who don’t have this kind of institutional power.”

This was also the first year that NeurIPS required paper authors to include societal impact and financial disclosure statements. Financial disclosures are due in January, when authors submit final versions of papers. Four papers were rejected by reviewers this year on ethical grounds.

On a very different front for the future of AI this week, the technical effort behind putting on the NeurIPS research conference was itself historic. In all, 22,000 people attended the virtual conference, compared to 13,000 last year in Vancouver. The formula for how to make a virtual NeurIPS came out of ICLR and ICML, major AI research conferences held in the spring and summer, respectively.

Prior to the pandemic, prominent AI researchers argued in favor of exploring more remote options as a way to cut the carbon footprint associated with flying to events held around the world. Some of those ideas were played out with short notice for the International Conference on Learning Representations (ICLR), the first major all-digital AI research conference.

Organizers say they learned that Zoom was not a great venue for poster sessions. Instead, NeurIPS poster sessions took place in gather.town, a spatial video chat service. Each user has an avatar and the ability to move freely between posters summarizing research.

One matter that hasn’t been resolved yet is whether AI research conferences will continue to offer a virtual attendance option when there is no longer a global pandemic. Going virtual means lower costs for organizers taking sponsorship money from corporations and broader access, but should that happen, an organizing committee member cautioned against the virtual option becoming a second-class experience compared to that of people who can afford to fly to an in-person event.

One participant in a Q&A session between attendees and organizers asked about hosting hybrid in-person and virtual options, then said the following: “I sincerely hope we are able to return to in person meetings. But I also think the benefits of the virtual experience should not be discarded, especially to enable more people to participate, who may face hardships in attending in person, such as for financial, visa related, or other reasons.”

It’s tough to say what lasting change will come from continuing efforts to address harm caused by AI or from the virtual conference format, but between an AI ethics meltdown at Google and NeurIPS hosting the largest virtual AI conference held to date, the easiest conclusion to draw is that after this week, machine learning may never be the same, and I hope that’s a good thing.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Khari Johnson

Senior AI Staff Writer

Big Data – VentureBeat

AI Weekly: In firing Timnit Gebru, Google puts commercial interests ahead of ethics

December 5, 2020   Big Data

This week, Timnit Gebru, a leading AI researcher, was fired from her position on an AI ethics team at Google in what she claims was retaliation for sending an email to colleagues critical of the company’s managerial practices. Reportedly the flashpoint was a paper Gebru coauthored that questioned the wisdom of building large language models and examined who benefits from (and who’s disadvantaged by) them.

Google AI lead Jeff Dean wrote in an email to employees following Gebru’s departure that the paper didn’t meet Google’s criteria for publication because it lacked reference to recent research. But from all appearances, Gebru’s work simply spotlighted well-understood problems with models like those deployed by Google, OpenAI, Facebook, Microsoft, and others. A draft obtained by VentureBeat discusses risks associated with deploying large language models including the impact of their carbon footprint on marginalized communities and their tendency to perpetuate abusive language, hate speech, microaggressions, stereotypes, and other dehumanizing language aimed at specific groups of people.

Indeed, Gebru’s work appears to build on a number of recent studies examining the hidden costs of training and deploying large-scale language models. A team from the University of Massachusetts at Amherst found that the amount of power required for training and searching a certain model entails emissions of roughly 626,000 pounds of carbon dioxide, equivalent to nearly five times the lifetime emissions of the average U.S. car. It’s a scientific fact that impoverished groups are more likely to experience significant health issues related to environmental concerns, with one study out of Yale University finding that poor communities and those composed predominantly of racial minorities experienced substantially higher exposure to air pollution compared to nearby affluent, white neighborhoods.

Gebru’s and colleagues’ assertion that language models can spout toxic content is similarly grounded in extensive prior research. In the language domain, a portion of the data used to train models is frequently sourced from communities with pervasive gender, race, and religious prejudices. AI research firm OpenAI notes that this can lead to placing words like “naughty” or “sucked” near female pronouns and “Islam” near words like “terrorism.” Other studies, like one published by Intel, MIT, and Canadian AI initiative CIFAR researchers in April, have found high levels of stereotypical bias from some of the most popular models, including Google’s BERT and XLNet, OpenAI’s GPT-2, and Facebook’s RoBERTa. This bias could be leveraged by malicious actors to foment discord by spreading misinformation, disinformation, and outright lies that “radicalize individuals into violent far-right extremist ideologies and behaviors,” according to the Middlebury Institute of International Studies.

In his email, Dean accused Gebru and the paper’s other coauthors of disregarding advances showing greater efficiencies in training that might mitigate carbon impact and failing to take into account recent research to mitigate language model bias. But this seems disingenuous. In a paper published earlier this year, Google trained a massive language model — GShard — using 2,048 of its third-generation tensor processing units (TPUs), chips custom-designed for AI training workloads. One estimate pegs the wattage of a single TPU at around 200 watts per chip, suggesting that GShard required an enormous amount of power to train. And on the subject of bias, OpenAI, which made GPT-3 available via an API earlier this year, has only begun experimenting with safeguards including “toxicity filters” to limit harmful language generation.
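
As a rough sanity check on that claim, the back-of-the-envelope arithmetic below multiplies the cited per-chip figure by the number of TPUs reported for GShard. The training duration is a purely hypothetical assumption used only to show how chip power translates into energy; it is not a figure reported by Google, and the total excludes host servers, networking, and cooling.

```python
NUM_CHIPS = 2048              # third-generation TPUs reported for GShard training
WATTS_PER_CHIP = 200          # rough per-chip estimate cited above
ASSUMED_TRAINING_DAYS = 4     # hypothetical duration, for illustration only

power_kw = NUM_CHIPS * WATTS_PER_CHIP / 1000               # ~410 kW of chip power
energy_mwh = power_kw * 24 * ASSUMED_TRAINING_DAYS / 1000  # kWh -> MWh

print(f"Chip power alone: {power_kw:.0f} kW")
print(f"Energy over {ASSUMED_TRAINING_DAYS} days: {energy_mwh:.1f} MWh")
```

Even under these conservative assumptions, the chip power alone works out to roughly 410 kW, which is the sense in which the text calls the training power requirement enormous.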

In the draft paper, Gebru and colleagues reasonably suggest that large language models have the potential to mislead AI researchers and prompt the general public to mistake their text as meaningful, when the contrary is true. (Popular natural language benchmarks don’t measure AI models’ general knowledge well, studies show.) “If a large language model … can manipulate linguistic form well enough to cheat its way through tests meant to require language understanding, have we learned anything of value about how to build machine language understanding or have we been led down the garden path?” the paper reads. “We advocate for an approach to research that centers the people who stand to be affected by the resulting technology, with a broad view on the possible ways that technology can affect people.”

It’s no secret that Google has commercial interests in conflict with the viewpoints expressed in the paper. Many of the large language models it develops power customer-facing products including Cloud Translation API and Natural Language API. The company often touts its work in AI ethics and seemingly tolerated — if reluctantly — internal research critical of its approaches in the past. But the letting go of Gebru would appear to mark a shift in thinking among Google’s leadership, particularly in light of the company’s crackdowns on dissent, most recently in the form of illegal spying on employees before firing them. In any case, it bodes poorly for Google’s openness to debate about critical issues around AI and machine learning. And given its outsize influence in the research community, the effects could be far-ranging.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark The Machine.

Big Data – VentureBeat

AI Weekly: The state of machine learning in 2020

November 27, 2020   Big Data

It’s hard to believe, but a year in which the unprecedented seemed to happen every day is just weeks from being over. In AI circles, the end of the calendar year means the rollout of annual reports aimed at defining progress, impact, and areas for improvement.

The AI Index is due out in the coming weeks, as is CB Insights’ assessment of global AI startup activity, but two reports — both called The State of AI — have already been released.

Last week, McKinsey released its global survey on the state of AI, a report now in its third year. Interviews with executives and a survey of business respondents found a potential widening of the gap between businesses that apply AI and those that do not.

The survey reports that AI adoption is more common in tech and telecommunications than in other industries, followed by automotive and manufacturing. More than two-thirds of respondents with such use cases say adoption increased revenue, but fewer than 25% saw significant bottom-line impact.

Along with questions about AI adoption and implementation, the McKinsey State of AI report examines companies whose AI applications led to EBIT growth of 20% or more in 2019. Among the report’s findings: Respondents from those companies were more likely to rate C-suite executives as very effective, and the companies were more likely to employ data scientists than other businesses were.

High-performing companies were also more likely than others, by differences of 20% to 30% or more, to have a strategic vision and an AI initiative road map, use frameworks for AI model deployment, or use synthetic data when they encountered an insufficient amount of real-world data. These results seem consistent with a Microsoft-funded Altimeter Group survey conducted in early 2019 that found half of high-growth businesses planned to implement AI in the year ahead.

If there was anything surprising in the report, it’s that only 16% of respondents said their companies have moved deep learning projects beyond a pilot stage. (This is the first year McKinsey asked about deep learning deployments.)

Also surprising: The report showed that businesses made little progress toward mounting a response to risks associated with AI deployment. Compared with responses submitted last year, the share of companies taking steps to mitigate such risks increased by an average of 3% across 10 different kinds of risk — from national security and physical safety to regulatory compliance and fairness. Cybersecurity was the only risk that a majority of respondents said their companies are working to address. The percentage of those surveyed who consider AI risks relevant to their company actually dropped in a number of categories, including in the area of equity and fairness, which declined from 26% in 2019 to 24% in 2020.

McKinsey partner Roger Burkhardt called the survey’s risk results concerning.

“While some risks, such as physical safety, apply to only particular industries, it’s difficult to understand why universal risks aren’t recognized by a much higher proportion of respondents,” he said in the report. “It’s particularly surprising to see little improvement in the recognition and mitigation of this risk, given the attention to racial bias and other examples of discriminatory treatment, such as age-based targeting in job advertisements on social media.”

Less surprising, the survey found an uptick in automation in some industries during the pandemic. VentureBeat reporters have found this to be true across industries like agriculture, construction, meatpacking, and shipping.

“Most respondents at high performers say their organizations have increased investment in AI in each major business function in response to the pandemic, while less than 30% of other respondents say the same,” the report reads.

The McKinsey State of AI in 2020 global survey was conducted online from June 9 to June 19 and garnered nearly 2,400 responses, with 48% reporting that their companies use some form of AI. A 2019 McKinsey survey of roughly the same number of business leaders found that while nearly two-thirds of companies reported revenue increases due to the use of AI, many still struggled to scale its use.

The other State of AI

A month before McKinsey published its business survey, Air Street Capital released its State of AI report, which is now in its third year. The London-based venture capital firm found the AI industry to be strong when it comes to company funding rounds, but its report calls centralization of AI talent and compute “a huge problem.” Other serious problems Air Street Capital identified include ongoing brain drain from academia to industry and issues with reproducibility of models created by private companies.

A number of the report’s conclusions are in line with a recent analysis of AI research papers that found the concentration of deep learning activity among Big Tech companies, industry leaders, and elite universities is increasing inequality. The team behind this analysis says a growing “compute divide” could be addressed in part by the implementation of a national research cloud.

As we inch toward the end of the year, we can expect more reports on the state of machine learning. The state of AI reports released in the past two months demonstrate a variety of challenges but suggest AI can help businesses save money, generate revenue, and follow proven best practices for success. At the same time, researchers are identifying big opportunities to address the various risks associated with deploying AI.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Khari Johnson

Senior AI Staff Writer

Big Data – VentureBeat

AI Weekly: Tech, power, and building the Biden administration

November 14, 2020   Big Data

After the defeat of Donald Trump, there was little time between Joe Biden and Kamala Harris’ celebratory speeches and the start of conversations about transition team members and key administration appointments.

Some of the first names to emerge include people with tech backgrounds like former Google CEO Eric Schmidt, who may be tapped to lead a tech industry panel in the White House. Since leaving Google, Schmidt has extended his services to the Pentagon, especially around machine learning. He has also headed the Defense Innovation Board at the Pentagon and the National Security Commission on AI (NSCAI), a group advising Congress that more federal spending is needed to compete with China. NSCAI commissioners have so far recommended things like the creation of a government-run AI university and increasing public-private partnership in the semiconductor industry.

Hearing names like Schmidt and others raised questions about how close the administration will get with Big Tech at a time when tech companies are gaining reputations as the next Big Tobacco. Unlike when Biden first entered the White House in 2009, a number of sources today say Big Tech’s concentration of power is an accelerant of inequality.

A Department of Justice antitrust lawsuit against Google and a congressional committee investigation both found that Big Tech companies enjoy an edge based on compute, machine learning, access to large amounts of personal data, and wealth. The congressional report also concludes that Big Tech poses a threat not only to a competitive free market economy but also to democracy.

A paper covered by VentureBeat this week found that a compute divide is driving inequality in AI research, concentrating power, and giving an advantage to universities and Big Tech companies in the age of deep learning. The Biden campaign platform committed to increasing federal research and development spending in areas like AI and 5G to as much as $300 billion, spending that could help address that inequality as well as fund projects identified by groups like the NSCAI.

The Obama-Biden administration developed a reputation for bringing new concepts into the White House, like the appointment of a chief technology officer and a chief data scientist, support for open access to data, and championing of public service by people with tech skills, but that all seems like a long time ago.

Speaking to changing attitudes since then, Tim Wu, who testified as part of a congressional antitrust investigation into Big Tech, told the Financial Times: “There has been a shift since the Obama administration, even among the people working in that administration, in the way they think about power in the tech world.”

Despite those changes, work to build civic tech that improves lives remains undone, said Nicole Wong, who served as deputy White House CTO. She entered the role shortly after the Edward Snowden leaks went public in 2014 and was responsible for privacy, internet, and innovation policy. She was also part of legal teams at Google and Twitter. Wong is now serving on a Biden review team for the National Security Council, according to Reuters.

In a speech delivered about a year ago at Aspen Tech Policy Hub in San Francisco, she said the government has outdated and inefficient tech and that there’s a pipeline problem for people with tech skills who want to apply their talent to public service. Wong said she still believes that the government can make technology that improves human lives and that it’s important that it do so. Modernizing outdated government tech isn’t moonshot technology, she said, but public trust is at its lowest level since the 1970s, a trend that started before Trump came into office. That pipeline issue is important because the decline in public trust is due in part to a failure to deliver for the people.

“That’s why the non-glamorous work of modernizing a 70-year-old system matters just as much or more as perfecting a self-driving car, or putting a person on Mars,” she said. “If we can order a gluten-free chocolate cake on our mobile phone while sitting in our living room and have it delivered in an hour then we should be able to help a single mother get food stamps without having to take a day off work and fill out paperwork and stand in line at a limited hours government office. We should be able to get our benefits to our veterans who fought for our country and the world that makes this tech possible.”

Some believe Biden plans to take on Big Tech companies like Facebook. Gene Kimmelman, who testified in favor of antitrust reform last year, will be part of the Department of Justice review team, for example. Others have concluded that initial appointments signal the opposite.

If you’re interested in seeing particulars about some of the tech connections, Protocol made an interactive graph that shows connections between acquaintances, family, and current and former employers. Whom the Biden administration chooses may reflect its priorities and the diverse coalition that delivered the Biden ticket to office, and may inspire people to the kind of public service that Wong talked about in order to solve moonshot problems and improve people’s lives. The choices can also reflect the shift in attitudes about Big Tech and power that Wu mentioned. The involvement of people like chief of staff Ron Klain seems to indicate they will at least believe in science.

In the days and weeks ahead, we will learn more about what the Biden cabinet and heads of federal agencies will look like. Building the Biden administration will have to take a lot of factors into account, from short-term problems like a global pandemic and the urgent need for U.S. economic recovery to longer-term issues like the decline in public trust in government, the concentration of power by Big Tech, the continuing decline of democracy in our time, and the rise of surveillance and autocratic rule at a time of accelerating deployments of AI in business and government.

For AI coverage, send news tips to Khari Johnson, Kyle Wiggers, and Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Khari Johnson

Senior AI Staff Writer


Big Data – VentureBeat

AI Weekly: Constructive ways to take power back from Big Tech

October 23, 2020   Big Data

Facebook launched an independent oversight board and recommitted to privacy reforms this week, but after years of promises made and broken, nobody seems convinced that real change is afoot. The Federal Trade Commission (FTC) is expected to decide whether to sue Facebook soon, sources told the New York Times, following a $5 billion fine last year.

In other investigations, the Department of Justice filed suit against Google this week, accusing the Alphabet company of maintaining multiple monopolies through exclusive agreements, collection of personal data, and artificial intelligence. News also broke this week that Google’s AI will play a role in creating a virtual border wall.

What you see in each instance is a powerful company insistent that it can regulate itself as government regulators appear to reach the opposite conclusion.

If Big Tech’s machinations weren’t enough, this week there was also news of a Telegram bot that undresses women and girls; AI being used to add or change the emotion of people’s faces in photos; and Clearview AI, a company being investigated in multiple countries, allegedly planning to introduce features for police to more responsibly use its facial recognition services. Oh, right, and there’s a presidential election campaign happening.

It’s all enough to make people reach the conclusion that they’re helpless. But that’s an illusion, one that Prince Harry, Duchess Meghan Markle, Algorithms of Oppression author Dr. Safiya Noble, and Center for Humane Technology director Tristan Harris attempted to dissect earlier this week in a talk hosted by Time. Dr. Noble began by acknowledging that AI systems in social media can pick up, amplify, and deepen existing systems of inequality like racism or sexism.

“Those things don’t necessarily start in Silicon Valley, but I think there’s really little regard for that when companies are looking at maximizing the bottom line through engagement at all costs, it actually has a disproportionate harm and cost to vulnerable people. These are things we’ve been studying for more than 20 years, and I think they’re really important to bring out this kind of profit imperative that really thrives off of harm,” Noble said.

As Markle pointed out during the conversation, the majority of extremists in Facebook groups got there because Facebook’s recommendation algorithm suggested they join those groups.

To act, Noble said, pay attention to public policy and regulation. Both are crucial to conversations about how businesses operate.

“I think one of the most important things people can do is to vote for policies and people that are aware of what’s happening and who are able to truly intervene, because we’re born into the systems that we’re born into,” she said. “If you ask my parents what it was like being born before the Civil Rights Act was passed, they had a qualitatively different life experience than I have. So I think part of what we have to do is understand the way that policy truly shapes the environment.”

When it comes to misinformation, Noble said people would be wise to advocate in favor of sufficient funding for what she called “counterweights” like schools, libraries, universities, and public media, which she said have been negatively impacted by Big Tech companies.

“When you have a sector like the tech sector that is so extractive — it doesn’t pay taxes, it offshores its profits, it defunds the democratic educational counterweights — those are the places where we really need to intervene. That’s where we make systemic long-term change, is to reintroduce funding and resources back into those spaces,” she said.

Forms of accountability make up one of five values found in many AI ethics principles. During the talk, Tristan Harris emphasized the need for systemic accountability and transparency in Big Tech companies so the public can better understand the scope of problems. For example, Facebook could form a board for the public to report harms; then Facebook can produce quarterly reports on progress toward removing those harms.

For Google, one way to increase transparency could be to release more information about AI ethics principle review requests made by Google employees. A Google spokesperson told VentureBeat that Google does not share this information publicly, beyond some examples. Getting that data on a quarterly basis might reveal more about the politics of Googlers than anything else, but I’d sure like to know if Google employees have reservations about the company increasing surveillance along the U.S.-Mexico border or which controversial projects attract the most objections at one of the most powerful AI companies on Earth.

Since Harris and others released The Social Dilemma on Netflix about a month ago, a number of people criticized the documentary for failing to include the voices of women, particularly Black women like Dr. Noble, who have spent years assessing issues undergirding The Social Dilemma, such as how algorithms can automate harm. That being said, it was a pleasure to see Harris and Noble speak together about how Big Tech can build more equitable algorithms and a more inclusive digital world.

For a breakdown of what The Social Dilemma misses, you can read this interview with Meredith Whittaker, which took place this week at a virtual conference. But she also contributes to the heartening conversation about solutions. One helpful piece of advice from Whittaker: Dismiss the idea that the algorithms are superhuman or superior technology. Technology isn’t infallible, and Big Tech isn’t magical. Rather, the grip large tech companies have on people’s lives is a reflection of the material power of large corporations.

“I think that ignores the fact that a lot of this isn’t actually the product of innovation. It’s the product of a significant concentration of power and resources. It’s not progress. It’s the fact that we all are now, more or less, conscripted to carry phones as part of interacting in our daily work lives, our social lives, and being part of the world around us,” Whittaker said. “I think this ultimately perpetuates a myth that these companies themselves tell, that this technology is superhuman, that it’s capable of things like hacking into our lizard brains and completely taking over our subjectivities. I think it also paints a picture that this technology is somehow impossible to resist, that we can’t push back against it, that we can’t organize against it.”

Whittaker, a former Google employee who helped organize a walkout at Google offices worldwide in 2018, also finds workers organizing within companies to be an effective solution. She encouraged employees to recognize methods that have proven effective in recent years, like whistleblowing to inform the public and regulators. Volunteerism and voting, she said, may not be enough.

“We now have tools in our toolbox across tech, like the walkout, a number of Facebook workers who have whistleblown and written their stories as they leave, that are becoming common sense,” she said.

In addition to understanding how power shapes perceptions of AI, Whittaker encourages people to try to better understand how AI influences our lives today. Amid so many other things this week, it might have been easy to miss, but the group AIandYou.org, which wants to help people understand how AI impacts their daily lives, dropped its first introductory video with Spelman College computer science professor Dr. Brandeis Marshall and actress Eva Longoria.

The COVID-19 pandemic, a historic economic recession, calls for racial justice, and the consequences of climate change have made this year challenging, but one positive outcome is that these events have led a lot of people to question their priorities and how each of us can make a difference.

The idea that tech companies can regulate themselves appears to some degree to have dissolved. Institutions are taking steps now to reduce Big Tech’s power, but even with Congress, the FTC, and the Department of Justice — the three main levers of antitrust — now acting to try to rein in the power of Big Tech companies, I don’t know a lot of people who are confident the government will be able to do so. Tech policy advocates and experts, for example, openly question whether factions in Congress can muster the political will to bring lasting, effective change.

Whatever happens in the election or with antitrust enforcement, you don’t have to feel helpless. If you want change, people at the heart of the matter believe it will require, among other things, imagination, engagement with tech policy, and a better understanding of how algorithms impact our lives in order to wrangle powerful interests and build a better world for ourselves and future generations.

As Whittaker, Noble, and the leader of the antitrust investigation in Congress have said, the power possessed by Big Tech can seem insurmountable, but if people get engaged, there are real reasons to hope for change.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Khari Johnson

Senior AI Staff Writer


Big Data – VentureBeat

AI Weekly: U.S. lawmakers decry the chilling effect of federal surveillance at protests

October 17, 2020   Big Data

There’s a thread that runs through police violence against Black people and connects to overpolicing, onerous and problematic tactics like facial recognition, AI-powered predictive policing, and federal agencies’ surveillance of protestors. It’s almost a loop; at the very least, it’s a knot.

For months, American citizens have tirelessly protested against police violence, largely in response to the police killings of George Floyd and Breonna Taylor. Numerous reports allege that federal agencies have conducted surveillance on protestors. According to some members of Congress, this is creating a chilling effect on First Amendment rights: This week, Representatives Anna Eshoo (D-CA) and Bobby Rush (D-IL), along with Senator Ron Wyden (D-OR), sent a letter asking the Privacy and Civil Liberties Oversight Board (PCLOB), an independent federal agency, to investigate those reports.

“The act of protesting has played a central role in advancing civil rights in our country, and our Constitution protects the right of Americans to engage in peaceful protest unencumbered by government interference. We are, therefore, concerned that the federal government is infringing on this right,” reads the letter’s introduction.

Specifically, they want the PCLOB to investigate:

  • The federal government’s surveillance of recent protests
  • The legal authority supporting that surveillance
  • The government’s adherence to required procedures in using surveillance equipment
  • The chilling effect that federal government surveillance has had on protesters

The alleged surveillance measures include aircraft surveillance from Customs and Border Protection (CBP) involving devices that collect people’s cell phone data, the Department of Homeland Security (DHS) seeking to seize phones from protesters with the intention of extracting their data (a request that apparently went unfulfilled), and the DHS compiling information on journalists covering the protests (which seems to have stopped).

In a statement shared with VentureBeat, PCLOB board member Travis LeBlanc said, “I am deeply concerned by reports of the federal government’s surveillance of peaceful Black Lives Matter protesters exercising their constitutional rights. As the Privacy and Civil Liberties Oversight Board, we are empowered to conduct an independent investigation of any such government surveillance and I hope my fellow Board Members will join me in doing so promptly.”

The agency would not state what measures it may take as a result of its investigation, and indeed, its powers are somewhat limited. Formed in 2007 under the 9/11 Commission Act, the PCLOB’s two chief responsibilities are to oversee “implementation of Executive Branch policies, procedures, regulations, and information-sharing practices relating to efforts to protect the nation from terrorism” and to “review proposed legislation, regulations, and policies related to efforts to protect the nation from terrorism” in order to advise the executive branch of the U.S. government on how to meet its goals while preserving privacy and civil liberties. The PCLOB’s aegis expanded beyond terrorism with Section 803 of the 9/11 Commission Act, which requires that federal agencies submit reports about privacy and civil liberties reviews and complaints.

The agency has deep reach, at least — access to documents, and the right to interview anyone in the Executive Branch. But though it can conduct reviews and make recommendations, the only real legal action it can take is to subpoena people through the U.S. Attorney General’s office.

Though this week’s letter directly engages the PCLOB, it’s by no means the first salvo from concerned lawmakers. Earlier this year, Reps. Eshoo and Rush sent a letter of concern about surveillance and its chilling effect, signed by 33 other members of Congress, to heads of the FBI, Drug Enforcement Agency (DEA), National Guard Bureau, and CBP. “We demand that you cease any and all surveilling of Americans engaged in peaceful protests,” they wrote. They also demanded access to all documents these agencies have that pertain to the protests and surveillance. (The agency responses came by the barrelful and were included in a media announcement this week.) And in their most recent letter, Eshoo and Rush listed a dozen other letters of concern that members of Congress sent agencies and private companies expressing shades of these same concerns.

But the prior missives and responses have not, apparently, satisfied their concerns. In a statement to VentureBeat, Rep. Eshoo said, “It’s my hope that the PCLOB will conduct a thorough and independent investigation to uncover the facts about the allegations cited in my letter. These facts will help inform me and my colleagues about what actions Congress should take to prevent future abuses, update existing laws, and hold offenders accountable.”

The aforementioned thread continues through federal agencies’ protest surveillance to acts of aggression, intimidation, vigilantism, and in some cases violence. Unidentified agents in unmarked vehicles brazenly grabbed and detained protestors off the street in Oregon. Law enforcement directly or indirectly let an armed teenager roam the streets of Wisconsin, where he killed two protestors and injured a third. And the sitting President of the United States, during a nationally televised presidential debate, ominously told his supporters to watch the polls on election day — a thinly veiled overture to intimidate voters — and exhorted his white supremacist supporters to “stand by” and implied that they should be prepared to commit violence against leftist groups.

The chilling effect that Rep. Eshoo is so concerned about may follow the same path, from protests to the polls — which is all the more urgent given that the 2020 election is just over two weeks away. Preventing future abuses and holding offenders accountable is not merely the right thing to do, it’s crucial to a continued functioning democracy.


Big Data – VentureBeat

AI Weekly: Nvidia’s Maxine opens the door to deepfakes and bias in video calls

October 10, 2020   Big Data

Will AI power video chats of the future? That’s what Nvidia implied this week with the unveiling of Maxine, a platform that provides developers with a suite of GPU-accelerated AI conferencing software. Maxine brings AI effects including gaze correction, super-resolution, noise cancellation, face relighting, and more to end users, while in the process reducing how much bandwidth videoconferencing consumes. Quality-preserving compression is a welcome innovation at a time when videoconferencing is contributing to record bandwidth usage. But Maxine’s other, more cosmetic features raise uncomfortable questions about AI’s negative — and possibly prejudicial — impact.

A quick recap: Maxine employs AI models called generative adversarial networks (GANs) to modify faces in video feeds. Top-performing GANs can create realistic portraits of people who don’t exist, for instance, or snapshots of fictional apartment buildings. In Maxine’s case, they can enhance the lighting in a video feed and recomposite frames in real time.
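
For readers unfamiliar with the technique, the toy sketch below shows the adversarial setup a GAN relies on: a generator learns to produce samples that a discriminator cannot tell apart from real data. It is a generic PyTorch illustration with assumed toy dimensions, not Nvidia's Maxine code, whose models and training details have not been disclosed.

```python
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 64, 256  # assumed toy sizes, not Maxine's

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),      # fake "frame" features in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),          # probability the input is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_batch: torch.Tensor) -> None:
    batch = real_batch.size(0)
    fake_batch = generator(torch.randn(batch, LATENT_DIM))

    # Discriminator: label real samples 1 and generated samples 0.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real_batch), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake_batch.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator call its output real.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake_batch), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()

# One step on a random stand-in batch for real face-frame features.
train_step(torch.rand(32, DATA_DIM) * 2 - 1)
```

In a face-synthesis setting, the "real" samples would be frames or facial features from video, and the trained generator is what produces the relit or recomposited output.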

Bias in computer vision algorithms is pervasive, with Zoom’s virtual backgrounds and Twitter’s automatic photo-cropping tool disfavoring people with darker skin. Nvidia hasn’t detailed the datasets or AI model training techniques it used to develop Maxine, but it’s not outside of the realm of possibility that the platform might not, for instance, manipulate Black faces as effectively as light-skinned faces. We’ve reached out to Nvidia for comment.

Beyond the bias issue, there’s the fact that facial enhancement algorithms aren’t always mentally healthy. Studies by Boston Medical Center and others show that filters and photo editing can take a toll on people’s self-esteem and trigger disorders like body dysmorphia. In response, Google earlier this month said it would turn off by default its smartphones’ “beauty” filters that smooth out pimples, freckles, wrinkles, and other skin imperfections. “When you’re not aware that a camera or photo app has applied a filter, the photos can negatively impact mental wellbeing,” the company said in a statement. “These default filters can quietly set a beauty standard that some people compare themselves against.”

That’s not to mention how Maxine might be used to get around deepfake detection. Several of the platform’s features analyze the facial points of people on a call and then algorithmically reanimate the faces in the video on the other side, which could interfere with the ability of a system to identify whether a recording has been edited. Nvidia will presumably build in safeguards to prevent this — currently, Maxine is available to developers only in early access — but the potential for abuse is a question the company hasn’t so far addressed.

None of this is to suggest that Maxine is malicious by design. Gaze correction, face relighting, upscaling, and compression seem useful. But the issues Maxine raises point to a lack of consideration for the harms its technology might cause, a tech industry misstep so common it’s become a cliche. The best-case scenario is that Nvidia takes steps (if it hasn’t already) to minimize the ill effects that might arise. The fact that the company didn’t reserve airtime to spell out these steps at Maxine’s unveiling, however, doesn’t instill confidence.

For AI coverage, send news tips to Khari Johnson, Kyle Wiggers, and Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

Big Data – VentureBeat

AI Weekly: Amazon went wide with Alexa; now it’s going deep

September 26, 2020   Big Data

Amazon’s naked ambition to become part of everyone’s daily lives was on full display this week at its annual hardware event. It announced a slew of new Alexa-powered devices, including a home surveillance drone, a suite of Ring-branded car alarm systems, and miscellany like an adorable little kids’ Echo device. But it’s clear Amazon’s strategy has shifted, even if only for a product cycle, from going wide to going deep.

Last year, Amazon baked its virtual assistant into any household device that could accommodate a chip. Its list of new widgets with Alexa seemed a mile long and included a menagerie of home goods, like lamps and microwaves. The company also announced device partnerships that ensure Alexa would live on some devices alongside other virtual assistants, tools to make it easier for developers to create Alexa skills, networking devices and capabilities, and wearables. It was a volume play and an aggressive bid to build out its ecosystem in even more markets.

This year, Amazon had fewer devices to announce, but it played up ways it has made Alexa itself better than ever. That’s the second prong of the strategy here: Get Alexa everywhere, then improve the marquee features such that the experience for users eclipses anything the competition offers.

As is always the case at these sorts of events, Amazon talked big and dreamy about all the new Alexa features. Users will find out for themselves whether this is the real deal or just hype when Amazon rolls out updates over the course of the next year (they’re landing on smart home devices first). But on paper and in the staged demos, Alexa’s new capabilities certainly seem to bring it a step closer to the holy grail of speaking to a virtual assistant just like talking to a person.

That’s the crux of what Amazon says it has done to improve Alexa, imbuing it with AI to make it more humanlike. This includes picking up nuances in speech and adjusting its own cadence, asking its human conversation partner for clarifications to fill in knowledge, and using feedback like “Alexa, that’s wrong” to learn and correct itself.

Amazon is particularly proud of the new natural turn-taking capabilities, which help Alexa understand the vagaries of human conversation. For example, in a staged demo two friends talked about ordering a pizza through an Alexa device. Like normal humans, they didn’t use each other’s names in the conversation, they paused to think, they changed their minds and adjusted the order, and so on. Alexa “knew” when to chime in, as well as when they were talking to each other and not to the Alexa device.

At the event, Alexa VP and head scientist Rohit Prasad said this required “real invention” and that the team went beyond just natural language processing (NLP) to embrace multisensory AI — acoustic, linguistic, and visual cues. And he said those all happen locally, on the device itself.

This is thanks to Amazon’s new AZ1 Neural Edge processor, which is designed to accelerate machine learning applications on-device instead of in the cloud. In the event liveblog, Amazon said: “With AZ1, powerful inference engines can run quickly on the edge — starting with an all-neural speech recognition model that will process speech faster, making Alexa even more responsive.” There are scant details available about the chip, but it likely portends a near future when Alexa devices are able to do more meaningful virtual assisting without an internet connection.

Given the utter lack of information about the AZ1, it’s impossible to say what it can or can’t do. But it would be a potential game changer if it were able to handle all of Alexa’s new tricks on devices as simple as an Echo smart speaker. There could be positive privacy implications, too, if users were able to enjoy a newly powerful Alexa on-device, keeping their voice recordings out of Amazon’s cloud.

But for Amazon, going deep isn’t just about a more humanlike Alexa; it involves pulling people further into its ecosystem, which Amazon hopes is the sum of adding device and service ubiquity to more engaging user experiences.

Part of that effort centers on Ring devices, which now include not just front-door home security products but also car security products and a small autonomous drone for the inside of your home. They’re essentially surveillance devices — and taken together, they form an ecosystem of surveillance devices and services that Amazon owns, and that connects to law enforcement. You can buy into it as deeply as you want, creating a surveillance bubble inside your home, around your home, and on board your vehicles, regardless of where you’ve parked them. The tension over Ring devices — what and who they record, where those recordings go, and who uses them for what purpose — will only be amplified by this in-home drone and the car alarm and camera.

Whether Amazon goes deep or wide, what hasn’t changed is that it wants to be omnipresent in our lives. And with every event’s worth of new devices and capabilities, the company takes another step closer to that goal.

Big Data – VentureBeat

AI Weekly: Cutting-edge language models can produce convincing misinformation if we don’t stop them

September 19, 2020   Big Data

It’s been three months since OpenAI launched an API underpinned by cutting-edge language model GPT-3, and it continues to be the subject of fascination within the AI community and beyond. Portland State University computer science professor Melanie Mitchell found evidence that GPT-3 can make primitive analogies, and Columbia University’s Raphaël Millière asked GPT-3 to compose a response to the philosophical essays written about it. But as the U.S. presidential election nears, there’s growing concern among academics that tools like GPT-3 could be co-opted by malicious actors to foment discord by spreading misinformation, disinformation, and outright lies. In a paper published by the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC), the coauthors find that GPT-3’s strength in generating “informational,” “influential” text could be leveraged to “radicalize individuals into violent far-right extremist ideologies and behaviors.”

Bots are increasingly being used around the world to sow the seeds of unrest, either through the spread of misinformation or the amplification of controversial points of view. An Oxford Internet Institute report published in 2019 found evidence of bots disseminating propaganda in 50 countries, including Cuba, Egypt, India, Iran, Italy, South Korea, and Vietnam. In the U.K., researchers estimate that half a million tweets about the country’s proposal to leave the European Union sent between June 5 and June 12 came from bots. And in the Middle East, bots generated thousands of tweets in support of Saudi Arabia’s crown prince Mohammed bin Salman following the 2018 murder of Washington Post opinion columnist Jamal Khashoggi.

Bot activity perhaps most relevant to the upcoming U.S. elections occurred last November, when cyborg bots spread misinformation during the local Kentucky elections. VineSight, a company that tracks social media misinformation, uncovered small networks of bots retweeting and liking messages casting doubt on the gubernatorial results before and after the polls closed.

But bots historically haven’t been sophisticated; most simply retweet, upvote, or favorite posts likely to prompt toxic (or violent) debate. GPT-3-powered bots or “cyborgs” — accounts that attempt to evade spam detection tools by fielding tweets from human operators — could prove to be far more harmful given how convincing their output tends to be. “Producing ideologically consistent fake text no longer requires a large corpus of source materials and hours of [training]. It is as simple as prompting GPT-3; the model will pick up on the patterns and intent without any other training,” the coauthors of the Middlebury Institute study wrote. “This is … exacerbated by GPT-3’s impressively deep knowledge of extremist communities, from QAnon to the Atomwaffen Division to the Wagner Group, and those communities’ particular nuances and quirks.”

Above: A question-answer thread generated by GPT-3.

In their study, the CTEC researchers sought to determine whether people could color GPT-3’s knowledge with ideological bias. (GPT-3 was trained on hundreds of billions of words scraped from the internet, and its design lets it be steered with longer, representative prompts such as tweets, paragraphs, forum threads, and emails.) They found that it took only a few seconds to produce a system able to answer questions about the world in a way consistent with a conspiracy theory, in one case drawing on falsehoods originating from the QAnon and Iron March communities.
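To make concrete what “prompting” means here, the sketch below shows how a few-shot prompt conditions a single request to the 2020-era OpenAI Completions API, with no training step involved. The `davinci` engine name, the benign example prompt, and the sampling parameters are illustrative assumptions, not the prompts or settings CTEC used.

```python
import os

import openai  # the 2020-era OpenAI client (pip install openai)

openai.api_key = os.environ["OPENAI_API_KEY"]

# A few-shot prompt: the examples alone steer the tone and format of the
# completion. A deliberately benign topic is used to show the mechanism.
prompt = (
    "Q: Why is the sky blue?\n"
    "A: Sunlight scatters off air molecules, and shorter blue wavelengths scatter the most.\n\n"
    "Q: Why do leaves change color in autumn?\n"
    "A: Chlorophyll breaks down, revealing yellow and orange pigments already in the leaf.\n\n"
    "Q: Why does the moon have phases?\n"
    "A:"
)

response = openai.Completion.create(
    engine="davinci",   # base GPT-3 model exposed by the API at the time
    prompt=prompt,
    max_tokens=60,
    temperature=0.7,
    stop=["\n\n"],      # stop once the answer is complete
)

print(response["choices"][0]["text"].strip())
```

The researchers’ point is that the same mechanism works regardless of what the examples contain: the prompt alone, not any retraining, determines the slant of the output.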

“GPT-3 can complete a single post with convincing responses from multiple viewpoints, bringing in various different themes and philosophical threads within far-right extremism,” the coauthors wrote. “It can also generate new topics and opening posts from scratch, all of which fall within the bounds of [the communities’] ideologies.”

CTEC’s analysis also found GPT-3 is “surprisingly robust” with respect to multilingual language understanding, demonstrating an aptitude for producing Russian-language text in response to English prompts that show examples of right-wing bias, xenophobia, and conspiracism. The model also proved “highly effective” at creating extremist manifestos that were coherent, understandable, and ideologically consistent, communicating how to justify violence and instructing on anything from weapons creation to philosophical radicalization.

Above: GPT-3 writing extremist manifestos.

“No specialized technical knowledge is required to enable the model to produce text that aligns with and expands upon right-wing extremist prompts. With very little experimentation, short prompts produce compelling and consistent text that would believably appear in far-right extremist communities online,” the researchers wrote. “GPT-3’s ability to emulate the ideologically consistent, interactive, normalizing environment of online extremist communities poses the risk of amplifying extremist movements that seek to radicalize and recruit individuals. Extremists could easily produce synthetic text that they lightly alter and then employ automation to speed the spread of this heavily ideological and emotionally stirring content into online forums where such content would be difficult to distinguish from human-generated content.”

OpenAI says it’s experimenting with safeguards at the API level including “toxicity filters” to limit harmful language generation from GPT-3. For instance, it hopes to deploy filters that pick up antisemitic content while still letting through neutral content talking about Judaism.
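OpenAI hasn’t detailed how those filters work, so the sketch below only illustrates the general shape of an output-side filter: generate a completion, score it for toxicity, and withhold it above a threshold. The word-list scorer and the 0.5 threshold are crude placeholders assumed for illustration; a real filter would use a trained classifier of the kind discussed next.

```python
from typing import Callable

# Deliberately crude stand-in scorer; a production filter would use a
# classifier trained on labeled toxicity data rather than a word list.
BLOCKLIST = {"idiot", "moron", "scum"}

def placeholder_toxicity_score(text: str) -> float:
    words = {w.strip(".,!?").lower() for w in text.split()}
    return 1.0 if words & BLOCKLIST else 0.0

def gated_completion(generate: Callable[[str], str],
                     score: Callable[[str], float],
                     prompt: str,
                     threshold: float = 0.5) -> str:
    """Generate a completion, then withhold it if its toxicity score
    meets or exceeds the threshold (an output-side filter)."""
    completion = generate(prompt)
    if score(completion) >= threshold:
        return "[completion withheld by content filter]"
    return completion

# Usage with a stub generator standing in for an API call:
print(gated_completion(lambda p: p + " ... have a nice day.",
                       placeholder_toxicity_score,
                       "Thanks for writing in,"))
```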

Another solution might lie in a technique proposed by Salesforce researchers including former Salesforce chief scientist Richard Socher. In a recent paper, they describe GeDi (short for “generative discriminator”), a machine learning algorithm capable of “detoxifying” text generation by language models like GPT-3’s predecessor, GPT-2. During one experiment, the researchers trained GeDi as a toxicity classifier on an open source data set released by Jigsaw, Alphabet’s technology incubator. They claim that GeDi-guided generation resulted in significantly less toxic text than baseline models while achieving the highest linguistic acceptability.
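The core of GeDi is Bayes’ rule: a small class-conditional language model scores each candidate next token under a “desired” and an “undesired” control code, and the resulting class posterior reweights the base model’s next-token distribution. The toy sketch below walks through that arithmetic on invented probabilities for a five-token vocabulary; the numbers, the strength exponent, and the variable names are assumptions for illustration, not the paper’s implementation.

```python
import numpy as np

# Toy next-token distributions over a five-token vocabulary.
vocab = ["hello", "friend", "idiot", "thanks", "jerk"]

# Base language model's next-token probabilities at some decoding step.
p_base = np.array([0.30, 0.20, 0.25, 0.10, 0.15])

# Class-conditional LM probabilities for the same step under the
# "non-toxic" and "toxic" control codes (values invented for the sketch).
p_nontoxic = np.array([0.35, 0.30, 0.05, 0.25, 0.05])
p_toxic    = np.array([0.10, 0.05, 0.45, 0.05, 0.35])

# Bayes' rule with a uniform class prior: the probability that picking
# each token keeps the continuation in the desired (non-toxic) class.
posterior = p_nontoxic / (p_nontoxic + p_toxic)

# Guided distribution: reweight the base LM by the posterior raised to a
# strength exponent, then renormalize.
omega = 2.0
p_guided = p_base * posterior ** omega
p_guided /= p_guided.sum()

for token, pb, pg in zip(vocab, p_base, p_guided):
    print(f"{token:>8}: base={pb:.2f}  guided={pg:.2f}")
```

Probability mass shifts away from the tokens the discriminator associates with the undesired class, which is the effect the Salesforce authors report as lower toxicity at comparable linguistic acceptability.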

But technical mitigation can only achieve so much. CTEC researchers recommend partnerships between industry, government, and civil society to effectively manage and set the standards for use and abuse of emerging technologies like GPT-3. “The originators and distributors of generative language models have unique motivations to serve potential clients and users. Online service providers and existing platforms will need to accommodate for the impact of the output from such language models being utilized with the use of their services,” the researchers wrote. “Citizens and the government officials who serve them may empower themselves with information about how and in what manner creation and distribution of synthetic text supports healthy norms and constructive online communities.”

It’s unclear to what extent this will be possible ahead of the U.S. presidential election, but CTEC’s findings make the urgency apparent. GPT-3 and models like it have destructive potential if not properly curtailed, and it will take stakeholders from across the political and ideological spectrum to figure out how they can be deployed safely and responsibly.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

Big Data – VentureBeat
