
Category Archives: Business Intelligence

Cashierless tech could detect shoplifting, but bias concerns abound

January 24, 2021   Big Data



As the pandemic continues to rage around the world, it’s becoming clear that COVID-19 will endure longer than some health experts initially predicted. Owing in part to slow vaccine rollouts, rapidly spreading new strains, and politically charged rhetoric around social distancing, the novel coronavirus is likely to become endemic, necessitating changes in the ways we live our lives.

Some of those changes might occur in brick-and-mortar retail stores, where touch surfaces like countertops, cash, credit cards, and bags are potential viral spread vectors. The pandemic appears to have renewed interest in cashierless technology like Amazon Go, Amazon’s chain of stores that allow shoppers to pick up and purchase items without interacting with a store clerk. Indeed, Walmart, 7-Eleven, and cashierless startups including AiFi, Standard, and Grabango have expanded their presence over the past year.

But as cashierless technology becomes normalized, there’s a risk it could be used for purposes beyond payment, particularly shoplifting detection. While shoplifting detection isn’t problematic on its face, case studies illustrate that it’s susceptible to bias and other flaws that could, at worst, result in false positives.

Synthetic datasets

The bulk of cashierless platforms rely on cameras, among other sensors, to monitor the individual behaviors of customers in stores as they shop. Video footage from the cameras feeds into machine learning classification algorithms, which identify when a shopper picks up and places an item in a shopping cart, for example. During a session at Amazon’s re:Mars conference in 2019, Dilip Kumar, VP of Amazon Go, explained that Amazon engineers use errors like missed item detections to train the machine learning models that power its Go stores’ cashierless experiences, supplementing real footage with synthetic data. Synthetic datasets boost the diversity of the training data and ostensibly the robustness of the models, which use both geometry and deep learning to ensure transactions are associated with the right customer.

The problem with this approach is that synthetic datasets, if poorly audited, might encode biases that machine learning models then learn to amplify. Back in 2015, a software engineer discovered that the image recognition algorithms deployed in Google Photos, Google’s photo storage service, were labeling Black people as “gorillas.” Google’s Cloud Vision API recently mislabeled thermometers held by people with darker skin as guns. And countless experiments have shown that image-classifying models trained on ImageNet, a popular (but problematic) dataset containing photos scraped from the internet, automatically learn humanlike biases about race, gender, weight, and more.

Jerome Williams, a professor and senior administrator at Rutgers University’s Newark campus, told NBC that a theft-detection algorithm might wind up unfairly targeting people of color, who are routinely stopped on suspicion of shoplifting more often than white shoppers. A 2006 study of toy stores found that not only were middle-class white women often given preferential treatment, but also that the police were never called on them, even when their behavior was aggressive. And in a recent survey of Black shoppers published in the Journal of Consumer Culture, 80% of respondents reported experiencing racial stigma and stereotypes when shopping.


“The people who get caught for shoplifting is not an indication of who’s shoplifting,” Williams told NBC. In other words, Black shoppers who feel they’ve been scrutinized in stores might be more likely to appear nervous while shopping, which might be perceived by a system as suspicious behavior. “It’s a function of who’s being watched and who’s being caught, and that’s based on discriminatory practices.”

Some solutions explicitly designed to detect shoplifting track gait — patterns of limb movements — among other physical characteristics. It’s a potentially problematic measure considering that disabled shoppers, among others, might have gaits that appear suspicious to an algorithm trained on footage of able-bodied shoppers. As the U.S. Department of Justice’s Civil Rights Division, Disability Rights Section notes, some people with disabilities have a stagger or slurred speech related to neurological disabilities, mental or emotional disturbance, or hypoglycemia, and these characteristics may be misperceived as intoxication, among other states.

Tokyo startup Vaak’s anti-theft product, VaakEye, was reportedly trained on more than 100 hours of closed-circuit television footage to monitor the facial expressions, movements, hand movements, clothing choices, and over 100 other aspects of shoppers. AI Guardsman, a joint collaboration between Japanese telecom company NTT East and tech startup Earth Eyes, scans live video for “tells” like when a shopper looks for blind spots or nervously checks their surroundings.

NTT East, for one, makes no claims that its algorithm is perfect. It sometimes flags well-meaning customers who pick up and put back items and salesclerks restocking store shelves, a spokesperson for the company told The Verge. Despite this, NTT East claimed its system couldn’t be discriminatory because it “does not find pre-registered individuals.”

Walmart’s AI- and camera-based anti-shoplifting technology, which is provided by Everseen, came under scrutiny last May over its reportedly poor detection rates. In interviews with Ars Technica, Walmart workers said their top concern with Everseen was false positives at self-checkout. The employees believe that the tech frequently misinterprets innocent behavior as potential shoplifting.

Industry practices

Trigo, which emerged from stealth in July 2018, aims to bring checkout-less experiences to existing “medium to small” brick-and-mortar convenience stores. For a monthly subscription fee, the company supplies both high-resolution, ceiling-mounted cameras and an on-premises “processing unit” that runs machine learning-powered tracking software. Data is beamed from the unit to a cloud processing provider, where it’s analyzed and used to improve Trigo’s algorithms.

Trigo claims that it anonymizes the data it collects, that it can’t identify individual shoppers beyond the products they’ve purchased, and that its system is 99.5% accurate on average at identifying purchases. But when VentureBeat asked about what specific anti-shoplifting detection features the product offers and how Trigo trains algorithms that might detect theft, the company declined to comment.

Grabango, a cashierless tech startup founded by Pandora cofounder Will Glaser, also declined to comment for this article. Zippin says it requires shoppers to check in with a payment method and that staff is alerted only when malicious actors “sneak in somehow.” And Standard Cognition, which claims its technology can account for changes like when a customer puts back an item they initially considered purchasing, says it doesn’t and hasn’t ever offered shoplifting detection capabilities to its customers.

“Standard does not monitor for shoplifting behavior and we never have … We only track what people pick up or put down so we know what to charge them for when they leave the store. We do this anonymously, without biometrics,” CEO Jordan Fisher told VentureBeat via email. “An AI-driven system that’s trained responsibly with diverse sets of data should in theory be able to detect shoplifting without bias. But Standard won’t be the company doing it. We are solely focused on the checkout-free aspects of this technology.”


Above: OTG’s Cibo Express is the first confirmed brand to deploy Amazon’s “Just Walk Out” cashierless technology.

Separate interviews with The New York Times and Fast Company in 2018 tell a different story, however. Michael Suswal, Standard Cognition’s cofounder and chief operating officer, told The Times that Standard’s platform could look at a shopper’s trajectory, gaze, and speed to detect and alert a store attendant to theft via text message. (In the privacy policy on its website, Standard says it doesn’t collect biometric identifiers but does collect information about “certain body features.”) He also said that Standard hired 100 actors to shop for hours in its San Francisco demo store in order to train its algorithms to recognize shoplifting and other behaviors.

“We learn behaviors of what it looks like to leave,” Suswal told The Times. “If they’re going to steal, their gait is larger, and they’re looking at the door.”

A patent filed by Standard in 2019 would appear to support the notion that Standard developed a system to track gait. The application describes an algorithm trained on a collection of images that can recognize the physical features of customers moving in store aisles between shelves. This algorithm is designed to identify one of 19 different on-body points including necks, noses, eyes, ears, shoulders, elbows, wrists, hips, ankles, and knees.

Santa Clara-based AiFi also says its cashierless solution can recognize “suspicious behavior” inside of stores within a defined set of shopping behaviors. Like Amazon, the company uses synthetic datasets to generate a set of training and testing data without requiring customer data. “With simulation, we can randomize hairstyle, color, clothing, and body shape to ensure that we have a diverse and unbiased datasets,” a spokesperson told VentureBeat. “We respect user privacy and do not use facial recognition or personally identifiable information. It is our mission to change the future of shopping to make it automated, privacy-conscious, and inclusive.”
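
AiFi has not published its simulation pipeline, so the following is a purely hypothetical sketch in Python of what attribute-level domain randomization for synthetic shopper data generally looks like. Every attribute name and value range below is invented for illustration:

import random

# Hypothetical appearance attributes to randomize; AiFi's actual
# simulation parameters are not public.
HAIRSTYLES = ["short", "long", "curly", "braided", "bald"]
CLOTHING = ["t-shirt", "hoodie", "coat", "dress", "raincoat"]

def sample_synthetic_shopper(rng):
    """Draw one randomized shopper configuration for a simulator."""
    return {
        "hairstyle": rng.choice(HAIRSTYLES),
        "clothing": rng.choice(CLOTHING),
        "height_cm": rng.uniform(140, 200),    # body-shape proxies
        "build_scale": rng.uniform(0.7, 1.3),  # scale factor on a base mesh
    }

rng = random.Random(42)
synthetic_dataset = [sample_synthetic_shopper(rng) for _ in range(10000)]
print(synthetic_dataset[0])

The point of randomizing each attribute independently is that no single appearance dominates the training set, which is presumably what the company means by a “diverse and unbiased” dataset.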

A patent filed in 2019 by Accel Robotics reveals the startup’s proposed anti-shoplifting solution, which optionally relies on anonymous tags that don’t reveal a person’s identity. By analyzing camera images over time, a server can attribute motion to a person and purportedly infer whether they took items from a shelf with malintent. Shopper behavior can be tracked over multiple visits if “distinguishing characteristics” are saved and retrieved for each visitor, which could be used to identify shoplifters who’ve previously stolen from the store.

“[The system can be] configured to detect shoplifting when the person leaves the store without paying for the item. Specifically, the person’s list of items on hand (e.g., in the shopping cart list) may be displayed or otherwise observed by a human cashier at the traditional cash register screen,” the patent description reads. “The human cashier may utilize this information to verify that the shopper has either not taken anything or is paying/showing for all items taken from the store. For example, if the customer has taken two items from the store, the customer should pay for two items from the store.”

Lack of transparency

For competitive reasons, cashierless tech startups are generally loath to reveal the technical details of their systems. But this does a disservice to the shoppers subjected to them. Without transparency regarding the applications of these platforms and the ways in which they’re developed, it will likely prove difficult to engender trust among shoppers, shoplifting detection capabilities or no.

Zippin was the only company VentureBeat spoke with that volunteered information about the data used to train its algorithms. It said that depending on the particular algorithm to be trained, the size of the dataset varies from a few thousand to a few million video clips, with training performed in the cloud and models deployed to the stores after training. But the company declined to say what steps it takes to ensure the datasets are sufficiently diverse and unbiased, whether it uses actors or synthetic data, and whether it continuously retrains algorithms to correct for errors.

Systems like AI Guardsman learn from their mistakes over time by letting store clerks and managers flag false positives as they occur. It’s a step in the right direction, but without more information about how these systems work, it’s unlikely to allay shoppers’ concerns about bias and surveillance.

Experts like Christopher Eastham, a specialist in AI at the law firm Fieldfisher, call for frameworks to regulate the technology. And even Ryo Tanaka, the founder of Vaak, argues there should be notice before customers enter stores so that they can opt out. “Governments should operate rules that make stores disclose information — where and what they analyze, how they use it, how long they use it,” he told CNN.

Big Data – VentureBeat

Researchers propose Porcupine, a compiler for homomorphic encryption

January 23, 2021   Big Data


Homomorphic encryption (HE) is a privacy-preserving technology that enables computational workloads to be performed directly on encrypted data. HE enables secure remote computation, as cloud service providers can compute on data without viewing highly sensitive content. But despite its appeal, performance and programmability challenges remain a barrier to HE’s widespread adoption.

Realizing the potential of HE will likely require developing a compiler that can translate a plaintext, unencrypted codebase into encrypted code on the fly. In a step toward this, researchers at Facebook, New York University, and Stanford created Porcupine, a “synthesizing compiler” for HE. They say it results in speedups of up to 51% compared to heuristic-driven, entirely hand-optimized code.

Given a reference of a plaintext code, Porcupine synthesizes HE code that performs the same computation, the researchers explain. Internally, Porcupine models instruction noise, latency, behavior, and HE program semantics with a component called Quill. Quill enables Porcupine to reason about and search for HE kernels that are verifiably correct while minimizing the code’s latency and noise accumulation. The result is a suite that automates and optimizes the mapping and scheduling of plaintext to HE code.

In experiments, the researchers evaluated Porcupine using a range of image processing and linear algebra programs. According to the researchers, for small programs, Porcupine was able to find the same optimized implementations as hand-written baselines. And on larger, more complex programs, Porcupine discovered optimizations like factorization and even application-specific optimizations involving separable filters.

“Our results demonstrate the efficacy and generality of our synthesis-based compilation approach and further motivates the benefits of automated reasoning in HE for both performance and productivity,” the researchers wrote. “Porcupine abstracts away the details of constructing correct HE computation so that application designers can concentrate on other design considerations.”

Enthusiasm for HE has given rise to a cottage industry of startups aiming to bring it to production systems. Newark, New Jersey-based Duality Technologies, which recently attracted funding from one of Intel’s venture capital arms, pitches its HE platform as a privacy-preserving solution for “numerous” enterprises, particularly those in regulated industries. Banks can conduct privacy-enhanced financial crime investigations across institutions, so goes the company’s sales pitch, while scientists can tap it to collaborate on research involving patient records.

But HE offers no magic bullet. Even leading techniques can calculate only polynomial functions — a nonstarter for the many activation functions in machine learning that are non-polynomial. Plus, operations on encrypted data can involve only additions and multiplications of integers, which poses a challenge in cases where learning algorithms require floating point computations.
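
To make the polynomial restriction concrete, here is a small Python sketch (using numpy only, not any real HE library) of the standard workaround: fit a low-degree polynomial to a non-polynomial activation such as sigmoid over a bounded interval, so that the approximation can later be evaluated with nothing but additions and multiplications:

import numpy as np

# HE schemes evaluate only additions and multiplications, i.e. polynomials,
# so a non-polynomial activation like sigmoid must be approximated first.
x = np.linspace(-4, 4, 200)          # the fit is only valid on this interval
sigmoid = 1.0 / (1.0 + np.exp(-x))

# Least-squares fit of a degree-3 polynomial to sigmoid on [-4, 4].
coeffs = np.polyfit(x, sigmoid, deg=3)
approx = np.polyval(coeffs, x)

print("coefficients (highest degree first):", np.round(coeffs, 4))
print("max absolute error on [-4, 4]:", float(np.abs(approx - sigmoid).max()))

In a real system the fitted coefficients would then be applied to ciphertexts under a scheme such as CKKS; the trade-off is that the approximation only holds on the chosen interval.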

Big Data – VentureBeat

What mean should I use for this example?

January 23, 2021   BI News and Info

Recent Questions – Mathematica Stack Exchange

Search SQL Server error log files

January 23, 2021   BI News and Info

Each instance of SQL Server logs information about its processing to a file known as the error log. Depending on how long an instance has been up and what is being logged, the log files might be small or large. When the log files are small, they are fairly easy to browse using SQL Server Management Studio (SSMS). But when they are large, it is cumbersome to browse through them to find individual error log messages, and sometimes the error log file is so large it can’t be opened in SSMS at all. This article will show you a few different ways to browse and search SQL Server error log files.

Using SSMS to search and filter large SQL Server error log files

When browsing a large error log file with SSMS, it can take a long time just to scroll to the portion of the log you are interested in reviewing. Instead, I find it easier to use the search and filter options to find information in large error log files, and I’ll demonstrate how to use both.

Using the search option

The search option is useful for finding the next occurrence of a string of characters in the log. To try it out, open one of the archived log files and browse it, as shown in Figure 1.


Figure 1: Browsing my error log file

Figure 1 shows the beginning of the error log file, and the entries are sorted by the date/time from the oldest to the newest. You can see the Search function outlined by a red box at the top of the screenshot. To use the search function, just click on this search icon, which brings up the search dialog shown in Figure 2.


Figure 2: Search selection dialog

To search, just enter the string of characters you want to find in the Search for: field. The characters can be case-insensitive or case-sensitive based on whether the Match case check box is checked. You can also search just the Message column or all the columns, depending on whether the Search Message column only box is checked. When an error log file spans many days, you could uncheck this checkbox to search for a particular date/time string in the log. By doing this, the error log can be repositioned to display a specific day in a log file that contains multiple days.

For this demonstration, enter the string error in the Search for: criteria. Once the search criteria are filled in, the Search button is enabled, as shown in Figure 3.


Figure 3: Enabling Search Button

When clicking the Search button, the error log position is relocated to the first occurrence of the string error, as shown in Figure 4.


Figure 4: Repositioned to first occurrence of the string

Click the Search button again to move to the next message text that contains the string error, as shown in Figure 5.


Figure 5: Next occurrence of the string “error”

By reviewing Figure 5, you can see the search function found the string error just a few lines down further in the log (the actual string error is located out of view to the right). By clicking the search button repeatedly, you can progressively work through the large error log file finding all the messages that contain the string error. Once the last message is found, the search will start over from the top if you click the button again.

Using the search button repeatedly could be a little tedious, especially if the log file contains many messages with the string error. Another way to find all the messages without clicking and scrolling is to use the filter option.

Using the Filter Option

The filter option makes it a little easier to find all the occurrences of a string in the error log file. It does this by sifting through a large error log file and only displaying those rows that meet the filter criteria. Filtering is handy when you want to view specific log entries in a very large log file. To bring up the filter criteria, you need to click on the Filter options in the Log File Viewer window, as shown in Figure 6.


Figure 6: Selecting the Filter Option

When the filter option is clicked, the dialog box in Figure 7 is shown.


Figure 7: Filter Options

As you can see from Figure 7, there are several different filter selection options from which to choose. You can use one or more of these filter options to identify the error log records you want to display. Table 1 lists the descriptions for each of these filter options.

Table 1: Descriptions for each filter option

User: The user name that is associated with the log entry
Computer: The computer that is associated with the log entry
Start Date: Log entry must be created on or after this date
End Date: Log entry must be created on or before this date
Message contains text: Log entry message must contain this text (case-insensitive)
Source: The source of the log entry
Instance Name: The instance name that is associated with the log entry
Event: The event ID that is associated with the Windows log entry

To demonstrate how to use the filter dialog to find specific error logs, first try to find the ERRORLOG file directory name using the Message contains text filter item. The error log directory name is displayed on an error log line item that contains the string Logging SQL Server messages in the message text. Therefore, all you need to do is enter this string in the Message contains text filter item, check the Apply filter checkbox, and then click on the OK button, as shown in Figure 8.


Figure 8: Applying Filter

After clicking the OK button, only the error log lines that contain the text are displayed, as shown in Figure 9. If the Apply filter checkbox is not checked before clicking the OK button, the filter won’t be applied.


Figure 9: Results of message text filter

Using the filter item is especially useful for finding those messages that are hidden amongst all the messages you are not interested in. I also find using the Start Date and End Date filters extremely useful to find log entries for a specific date range. The date range filter is handy when the error log file is very large and contains multiple days of error log records.

Out of memory errors when viewing large logs

If SQL Server has been up for a while and the error log has not been cycled, or a lot of messages have been written to the log file over a short time, then the error log might be very large — possibly in the gigabyte size range. If you try to open one of these gigabyte log files using SSMS, a memory exception will occur. Figure 10 shows the out of memory exception that can occur when opening one of the large error log files.


Figure 10: Out of memory exception when trying to view a large error log file

I got this error when I tried to open one of my large, archived log files that was over 8 GB in size. When this error occurred, some of my log records were loaded into the viewer. I could still use the search option, but I got another memory exception when I tried to use the filter option.

If you are trying to use SSMS to view large log files and are running into memory issues, it doesn’t mean you are out of luck. There are other options to view, search, and filter these large log files.

Using a text editor to view a large log file

One option to view a large log file is to use a text editor. But it can’t just be any text editor; it needs to be one that can handle a large file. I have downloaded and used UltraEdit in the past to open large error log files. I’m not endorsing UltraEdit; I only mention it here because it is one of the editors I have used to look at large log files. Keep in mind that UltraEdit is not free software; you need a license to use this product long-term. Before you download any text editor off the internet, make sure you understand the software’s usage and license requirements.
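
If a suitable editor isn’t at hand, a short script can also stream-search a multi-gigabyte log without ever loading the whole file into memory. Here is a minimal Python sketch; the path is only an example, and the encoding may need adjusting for your instance (recent SQL Server versions write the error log as UTF-16):

# Stream-search a large SQL Server error log without loading it all into
# memory. Both the path and the encoding below are examples; adjust them
# for your instance.
LOG_PATH = r"C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Log\ERRORLOG.1"
SEARCH_STRING = "error"

with open(LOG_PATH, "r", encoding="utf-16", errors="replace") as log_file:
    for line_number, line in enumerate(log_file, start=1):
        if SEARCH_STRING in line.lower():  # case-insensitive match
            print(f"{line_number}: {line.rstrip()}")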

Programmatically searching the error log file

Another option for searching those larger log files is to do it programmatically. SQL Server provides an undocumented extended stored procedure named xp_readerrorlog that can be used to search the error log and the SQL Agent log files.

Listing 1 is an example of how I used this undocumented stored procedure to search the active error log file on one of my instances of SQL Server.

Listing 1: Using xp_readerrorlog to find the location of error log file

exec xp_readerrorlog 0,1,N'Logging SQL Server messages in file';

This example searches for the string Logging SQL Server messages in file in the active log file. The output shown in Figure 11 is returned when running the command.


Figure 11: Output from running code in Listing 1

Searching for this particular string in the active log file locates the log record that identifies the file where the error log messages are being written.

Even though this stored procedure is undocumented, there are many resources out there that explain how to use it. This stored procedure supports seven parameters. Those parameters are described in Table 2.

Table 2: Parameters for xp_readerrorlog

Parameter 1: Identifies the error log file that you would like to read. Set this parameter to 0 to read the current error log, or to 1, 2, 3, etc. to read one of the historical error log files.
Parameter 2: Identifies which error log to search: 1 or null for the ERRORLOG, 2 for the SQL Agent log.
Parameter 3: The first string you want to search for in the error log file.
Parameter 4: The second string you want to search for in the error log file.
Parameter 5: The start time constraint on searching.
Parameter 6: The end time constraint on searching.
Parameter 7: Sort order of the output (ascending or descending).

Finding all the records in a large log file that contain the word error can easily be done by just changing the search string in parameter 3 of the code in Listing 1. You can also write a short T-SQL script to find all the log records from the active SQL Server log file for yesterday and place them in a temporary table for further analysis, using the code in Listing 2.

-- Declare variables needed
DECLARE @StartDate date,
        @EndDate   date;

-- Create temporary table to hold error log records
CREATE TABLE #ErrorLogForYesterday (
  LogDate datetime,
  ProcessInfo varchar(max),
  Text varchar(max));

SET @StartDate = dateadd(dd,-1,getdate()); -- Yesterday's date
SET @EndDate = getdate();                  -- Today's date

-- Extract error log records for yesterday into the temporary table
INSERT INTO #ErrorLogForYesterday EXEC xp_readerrorlog
            0,1,N'',N'',@StartDate,@EndDate;

-- Display the error log records extracted
SELECT * FROM #ErrorLogForYesterday;

-- Cleanup
DROP TABLE #ErrorLogForYesterday;

Listing 2: Code to extract yesterday’s error log records

Programmatically finding error log records makes it easy to build processes to analyze the error log file. Using the method in Listing 2, a DBA could create a series of scripts that could programmatically run the xp_readerrorlog stored procedure to quickly analyze the different error log files.
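
As one hedged illustration of that kind of automation, the procedure can also be driven from outside SSMS, for example from Python with the pyodbc package. The connection string below is a placeholder, and since xp_readerrorlog is undocumented, its parameters and result columns can vary between SQL Server versions:

import pyodbc

# Placeholder connection string: point it at your own instance.
connection = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=master;Trusted_Connection=yes;"
)
cursor = connection.cursor()

# Scan the current ERRORLOG (0) plus the two most recent archives (1, 2)
# for records containing the string 'error'.
for log_number in (0, 1, 2):
    cursor.execute("EXEC xp_readerrorlog ?, ?, ?", log_number, 1, "error")
    for log_date, process_info, text in cursor.fetchall():
        print(log_number, log_date, process_info, text)

connection.close()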

Reading and Searching SQL Server Error Log Files

When SQL Server creates large error log files, it presents challenges for DBAs to read them. Large log files are cumbersome to scroll through to find errors. Luckily, the log viewer functionality of SSMS has the Filter and Search features built in to allow a DBA to find strings within these large log files quickly. Additionally, using T-SQL code to call the undocumented xp_readerrorlog stored procedure allows a DBA to build scripts to read those large log files. Using these different methods to find errors in large SQL Server log files is critical for managing and maintaining SQL Server.

If you like this article, you might also like SQL Server Error Log Configuration – Simple Talk (red-gate.com)

SQL – Simple Talk

Center for Applied Data Ethics suggests treating AI like a bureaucracy

January 22, 2021   Big Data


A recent paper from the Center for Applied Data Ethics (CADE) at the University of San Francisco urges AI practitioners to adopt terms from anthropology when reviewing the performance of large machine learning models. The research suggests using this terminology to interrogate and analyze bureaucracy, states, and power structures in order to critically assess the performance of large machine learning models with the potential to harm people.

“This paper centers power as one of the factors designers need to identify and struggle with, alongside the ongoing conversations about biases in data and code, to understand why algorithmic systems tend to become inaccurate, absurd, harmful, and oppressive. This paper frames the massive algorithmic systems that harm marginalized groups as functionally similar to massive, sprawling administrative states that James Scott describes in Seeing Like a State,” the author wrote.

The paper was authored by CADE fellow Ali Alkhatib, with guidance from director Rachel Thomas and CADE fellows Nana Young and Razvan Amironesei.

The researchers particularly look to the work of James Scott, who has examined hubris in administrative planning and sociotechnical systems. In Europe in the 1800s, for example, timber industry companies began using abridged maps and a field called “scientific forestry” to carry out monoculture planting in grids. While the practice resulted in higher initial yields in some cases, productivity dropped sharply in the second generation, underlining the validity of scientific principles favoring diversity. Like those abridged maps, Alkhatib argues, algorithms can both summarize and transform the world and are an expression of the difference between people’s lived experiences and what bureaucracies see or fail to see.

The paper, titled “To Live in Their Utopia: Why Algorithmic Systems Create Absurd Outcomes,” was recently published and accepted by the ACM Conference on Human Factors in Computing Systems (CHI), which will be held in May.

Recalling Scott’s analysis of states, Alkhatib warns against harms that can result from unhampered AI, including the administrative and computational reordering of society, a weakened civil society, and the rise of an authoritarian state. Alkhatib notes that such algorithms can misread and punish marginalized groups whose experiences do not fit within the confines of data considered to train a model.

People privileged enough to be considered the default by data scientists and who are not directly impacted by algorithmic bias and other harms may see the underrepresentation of race or gender as inconsequential. Data Feminism authors Catherine D’Ignazio and Lauren Klein describe this as “privilege hazard.” As Alkhatib put it, “other people have to recognize that race, gender, their experience of disability, or other dimensions of their lives inextricably affect how they experience the world.”

He also cautions against uncritically accepting AI’s promise of a better world.

“AIs cause so much harm because they exhort us to live in their utopia,” the paper reads. “Framing AI as creating and imposing its own utopia against which people are judged is deliberately suggestive. The intention is to square us as designers and participants in systems against the reality that the world that computer scientists have captured in data is one that surveils, scrutinizes, and excludes the very groups that it most badly misreads. It squares us against the fact that the people we subject these systems to repeatedly endure abuse, harassment, and real violence precisely because they fall outside the paradigmatic model that the state — and now the algorithm — has constructed to describe the world.”

At the same time, Alkhatib warns people not to see AI-driven power shifts as inevitable.

“We can and must more carefully reckon with the parts we play in empowering algorithmic systems to create their own models of the world, in allowing those systems to run roughshod over the people they harm, and in excluding and limiting interrogation of the systems that we participate in building.”

Potential solutions the paper offers include undermining oppressive technologies and following the guidance of Stanford AI Lab researcher Pratyusha Kalluri, who advises asking whether AI shifts power, rather than whether it meets a chosen numeric definition of fair or good. Alkhatib also stresses the importance of individual resistance and refusal to participate in unjust systems to deny them power.

Other recent solutions include a culture change in computer vision and NLP, reduction in scale, and investments to reduce dependence on large datasets that make it virtually impossible to know what data is being used to train deep learning models. Failure to do so, researchers argue, will leave a small group of elite companies to create massive AI models such as OpenAI’s GPT-3 and the trillion-parameter language model Google introduced earlier this month.

The paper’s cross-disciplinary approach is also in line with a diverse body of work AI researchers have produced within the past year. Last month, researchers released the first details of OcéanIA, which treats a scientific project for identifying phytoplankton species as a challenge for machine learning, oceanography, and science. Other researchers have advised a multidisciplinary approach to advancing the fields of deep reinforcement learning and NLP bias assessment.

We’ve also seen analysis of AI that teams sociology and critical race theory, as well as anticolonial AI, which calls for recognizing the historical context associated with colonialism in order to understand which practices to avoid when building AI systems. And VentureBeat has written extensively about the fact that AI ethics is all about power.

Last year, a cohort of well-known members of the algorithmic bias research community created an internal algorithm-auditing framework to close AI accountability gaps within organizations. That work asks organizations to draw lessons from the aerospace, finance, and medical device industries. Coauthors of the paper include Margaret Mitchell and Timnit Gebru, who used to lead the Google AI ethics team together. Since then, Google has fired Gebru and, according to a Google spokesperson, opened an investigation into Mitchell.

With control of the presidency and both houses of Congress in the U.S., Democrats could address a range of tech policy issues in the coming years, from laws regulating the use of facial recognition by businesses, governments, and law enforcement to antitrust actions to rein in Big Tech. However, a 50-50 Senate means Democrats may be forced to consider bipartisan or moderate positions in order to pass legislation.

The Biden administration emphasized support for diversity and distaste for algorithmic bias in a televised ceremony introducing the science and technology team on January 16. Vice President Kamala Harris has also spoken passionately against algorithmic bias and automated discrimination. In the first hours of his administration, President Biden signed an executive order to advance racial equality that instructs the White House Office of Science and Technology Policy (OSTP) to participate in a newly formed working group tasked with disaggregating government data. This initiative is based in part on concerns that an inability to analyze such data impedes efforts to advance equity.

Big Data – VentureBeat

Soci raises $80 million to power data-driven localized marketing for enterprises

January 22, 2021   Big Data



Soci, a platform that helps brick-and-mortar businesses deploy localized marketing campaigns, has raised $80 million in a series D round of funding led by JMI Equity.

The raise comes at a crucial time for businesses, with retailers across the spectrum having to rapidly embrace ecommerce due to the pandemic. However, businesses with local brick-and-mortar stores will still be around in a post-pandemic world. By focusing on their “local” presence, including offering local pages (e.g. Facebook) and reviews (e.g. Google and Yelp), businesses can lure customers away from Amazon and its ilk. This is where Soci comes into play.

Founded in 2012, San Diego-based Soci claims hundreds of enterprise-scale clients, such as Hertz and Ace Hardware, which use the Soci platform to manage local search, reviews, and content across their individual business locations. It’s all about ensuring that companies maintain accurate and consistent location-specific information, which can be particularly challenging for businesses with thousands of outlets.

“For multi-location enterprises, the ability to connect with local audiences across the most influential marketing networks like Google, Yelp, and Facebook was critical to keeping their local businesses afloat through the pandemic,” Soci cofounder and CEO Afif Khoury told VentureBeat.

Moreover, Soci offers analytics that can help determine which locations are performing best in terms of social reach and engagement, integrating with all the usual touchpoints where businesses typically connect to customers, such as Facebook, Yelp, and Google.

“Soci is now housing and analyzing all of the most critical marketing data from every significant local marketing channel, such as search, social, reviews, and ads,” Khoury continued.

Above: Soci: Local marketing data

Soci had previously raised around $35 million, and with its latest cash injection the company plans to double down on sales and M&A activity. Its lead investor hints at the direction Soci is taking, given that JMI Equity is largely focused on enterprise software companies like financial planning platform Adaptive Insights, which Workday acquired a few years ago for more than $1.5 billion.

Looking to the future, Soci said it plans to enhance its data integrations, spanning all the common business tools used by enterprises, to build a more complete picture that meshes data from the physical and virtual worlds.

“As Soci continues to integrate with other important ecosystems and technologies such as CRM, point-of-sale, and rewards programs, it will begin to effectively combine online and offline data and deliver an extremely robust customer profile that will enrich the insights we provide and enable much more effective marketing and customer service strategies,” Khoury said.

Big Data – VentureBeat

Aurora partners with Paccar to develop driverless trucks

January 20, 2021   Big Data



Self-driving startup Aurora today announced a partnership with Paccar to build and deploy autonomous trucks. It’s Aurora’s first commercial application in trucking, and the company says it will combine its engineering teams around an “accelerated development program” to create driverless-capable trucks starting with the Peterbilt 579 and the Kenworth T680.

Some experts predict the pandemic will hasten adoption of autonomous vehicles for delivery. Self-driving cars, vans, and trucks promise to minimize the risk of spreading disease by limiting driver contact. This is particularly true with regard to short-haul freight, which is experiencing a spike in volume during the outbreak. The producer price index for local truckload carriage jumped 20.4% from July to August, according to the U.S. Bureau of Labor Statistics, most likely propelled by demand for short-haul distribution from warehouses and distribution centers to ecommerce fulfillment centers and stores.

Aurora — which recently acquired Uber’s Advanced Technologies Group, the ride-hailing company’s driverless vehicle division, reportedly for around $4 billion — says it will work with Paccar to create an “expansive” plan for future autonomous trucks. Aurora and Paccar plan to work closely on “all aspects of collaboration,” from component sourcing and vehicle technology enhancements to the integration of the Peterbilt and Kenworth vehicles with Aurora’s hardware, software, and operational services.


Aurora will test and validate the driverless Peterbilt and Kenworth trucks at Paccar’s technical center in Mt. Vernon, Washington, as well as on public roads. The companies expect them to be deployed in North America within the next several years, during which time Paccar and Aurora will evaluate additional collaboration opportunities with Peterbilt, Kenworth, and DAF truck models and geographies.

Aurora, which was cofounded by Chris Urmson, one of the original leaders of the Google self-driving car project that became Waymo, has its sights set on freight delivery for now. In January, Aurora said that after a year of focusing on capabilities including merging, nudging, and unprotected left-hand turns, its autonomous system — the Aurora Driver, which has been integrated into six different types of vehicles to date, including sedans, SUVs, minivans, commercial vans, and freight trucks — can perform each seamlessly, “even in dense urban environments.” More recently, Aurora, which recently said it has over 1,600 employees, announced it will begin testing driverless vehicles, including semi trucks, in parts of Texas.

Last year, Aurora raised investments from Amazon and others totaling $600 million at a valuation of over $2 billion, a portion of which it spent to acquire lidar sensor startup Blackmore. (Lidar, a fixture on many autonomous vehicle designs, measures the distance to target objects by illuminating them with laser light and measuring the reflected pulses.) Now valued at $10 billion, Pittsburgh-based Aurora has committed to hiring more workers, with a specific focus on mid- to senior-level engineers in software and infrastructure, robotics, hardware, cloud, and firmware. The ATG purchase could grow the size of its workforce from around 600 to nearly 1,800, accounting for ATG’s roughly 1,200 employees.

Paccar, which was founded in 1905, is among the largest manufacturers of medium- and heavy-duty trucks in the world. The company engages in the design, manufacture, and customer support of light-, medium- and heavy-duty trucks under the Kenworth, Peterbilt, Leyland Trucks, and DAF nameplates.

The value of goods transported as freight cargo in the U.S. was estimated to be about $50 billion each day in 2013. And the driverless truck market — which is anticipated to reach 6,700 units globally after totaling $54.23 billion in 2019 — stands to save the logistics and shipping industry $70 billion annually while boosting productivity by 30%. Besides promised cost savings, the growth of trucking automation has been driven by a shortage of drivers. In 2018, the American Trucking Associations estimated that 50,000 more truckers were needed to close the gap in the U.S., despite the sidelining of proposed U.S. Transportation Department screenings for sleep apnea.

Big Data – VentureBeat

solve for variable in iterator limit

January 19, 2021   BI News and Info

Recent Questions – Mathematica Stack Exchange

Database trends: Why you need a ledger database

January 18, 2021   Big Data


The problem: The auto dealer can’t sell the car without being paid. The bank doesn’t want to loan the money without insurance. The insurance broker doesn’t want to write a policy without payment. The three companies need to work together as partners, but they can’t really trust each other.

When businesses need to cooperate, they need a way to verify and trust each other. In the past, they traded signed and sealed certificates. Today, you can deliver the same assurance with digital signatures, a mathematical approach that uses secret keys to let people or their computers validate data. Ledger databases are a new mechanism for marrying data storage with some cryptographic guarantees.

The use cases

Any place where people need to build a circle of trust is a good place to deploy a ledger database.

  • Cryptocurrencies like Bitcoin inspired the approach by creating a software tool for tracking the true owner of every coin. The blockchain run by the nodes in the Bitcoin network is a good example of how signatures can validate all transactions changing ownership.
  • Shipping companies need to track goods as they flow through a network of trucks, ships, and planes. Loss and theft can be minimized if each person along the way explicitly transfers control.
  • Manufacturers, especially those that create products like pharmaceuticals, want to make sure that no counterfeits enter the supply chain.
  • Coalitions, especially industry groups, need to work together while still competing. The ledger database can share a record of the events while providing some assurance that the history is accurate and unchanged.

The solution

Standard databases track a sequence of transactions that add, delete, or change entries. Ledger databases add a layer of digital signatures for each transaction so that anyone can audit the list and see that it was constructed correctly. More importantly, no one has gone back to adjust a previous transaction, to change history so to speak.

The digital signatures form a chain that links the individual rows or entries. Each signature is constructed to certify the data in the new row and also the data in the previous row. Taken together, all of the signatures added over time certify the sequence that data was added to the log. An auditor can look at some or all of the signatures to make sure they’re correct.
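
Here is a minimal Python sketch of that chain, using a SHA-256 hash chain in place of the full public-key signatures a production ledger would layer on top:

import hashlib
import json

def row_digest(row, prev_digest):
    """Digest covers the new row *and* the previous row's digest,
    chaining every entry to the full history before it."""
    payload = json.dumps(row, sort_keys=True) + prev_digest
    return hashlib.sha256(payload.encode()).hexdigest()

def append(ledger, row):
    prev = ledger[-1]["digest"] if ledger else "genesis"
    ledger.append({"row": row, "digest": row_digest(row, prev)})

def audit(ledger):
    """Recompute the chain; any edit to any historical row breaks it."""
    prev = "genesis"
    for entry in ledger:
        if entry["digest"] != row_digest(entry["row"], prev):
            return False
        prev = entry["digest"]
    return True

ledger = []
append(ledger, {"event": "loan approved", "amount": 25000})
append(ledger, {"event": "policy issued", "premium": 1200})
print(audit(ledger))            # True
ledger[0]["row"]["amount"] = 1  # tamper with history...
print(audit(ledger))            # ...and the audit fails: False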

In the case of Bitcoin, the database tracks the flow of every coin over time since the system was created. The transactions are grouped together in blocks that are processed about every ten minutes, and taken together, the chain of these blocks provides a history of the owner of every coin.

Bitcoin also includes an elaborate consensus protocol where anyone can compete to solve a mathematical puzzle and validate the next block on the chain. This ritual is often called “mining” because the person who solves this computational puzzle is rewarded with several coins. The protocol was designed to remove the need for central control by one trusted authority — an attractive feature for some coin owners. It is open and offers a relatively clear mechanism for resolving disputes.

Many ledger databases avoid this elaborate ritual. The cost of competing to solve these mathematical puzzles is quite high because of the energy that computers consume while they’re solving the puzzle. The architects of these systems just decide at the beginning who will be the authority to certify the changes. In other words, they choose the parties that will create the digital signatures that bless each addition without running some competition each step.

In the example from the car sales process, each of the three entities may choose to validate each other’s transactions. In some cases, the database vendor also acts as an authority in case there are any external questions.

The legacy players

Database vendors have been adding cryptographic algorithms to their products for some time. All of the major companies, like Oracle or Microsoft, offer mechanisms for encrypting the data to add security and offer privacy. The same toolkits include algorithms that can add digital signatures to each database row. In many cases, the features are included in the standard licenses, or can be added for very little cost.

The legacy companies are also adding explicit features that simplify the process. Oracle, for instance, added blockchain tables to version 21c of its database. They aren’t much different from regular tables, but they only support inserting rows. Each row is pushed through a hash function, and then the result from the previous row is added as a column to the next row that’s inserted. Deletions are tightly controlled.

The major databases also tend to have encryption toolkits that can be integrated to achieve much the same assurance. One approach with MySQL adds a digital signature to the rows. It is often possible to adapt an existing database and schema to become a ledger database by adding an extra field to each row. If the signature of the previous row is added to the new row, a chain of authentication can be created.

The upstarts

There are hundreds of startups exploring this space. Some are tech companies that are approaching the ledger database space like database developers. You could think of some others as accidental database creators.

It is a bit of a reach to include all of the various crypto currencies as ledger databases in this survey, but they are all managing distributed blockchains that store data. Some, like Ethereum, offer elaborate embedded processing that can create arbitrary digital contracts. Some of the people who are nominally buying a crypto coin as an asset are actually using the purchase to store data in the currency’s blockchain.

The problem for many users is that the cost of storing data depends on the cost of creating a transaction, and in most cases, these can be prohibitive for regular applications. It might make sense for special transactions that are small enough, rare enough, and important enough to need the extra assurance that comes from a public blockchain. For this reason, most of the current users tend to be speculators or people who want to hold the currency, not groups that need to store a constant volume of bits.

Amazon is offering the Quantum Ledger Database, a pay-as-you-go service with what the company calls an “SQL-like API”. All writes are cryptographically sealed with the SHA-256 hash function, allowing any auditor to go through the history to double-check the time of all events. The pricing is based upon the volume of data stored, the size of any indices built upon the data, and the amount of data that leaves. (It’s worth noting that the word “quantum” is just a brand name. It does not imply that a quantum computer is involved.)

The Hyperledger Fabric is a tool that creates a lightly interconnected version of the blockchain that can be run inside of an organization and shared with some trusted partners. It’s designed for scenarios where a few groups need to work together with data that isn’t shared openly. The code is an open source constellation of a number of different programs, which means that it’s not as easy to adopt as a single database. IBM is one company that’s offering commercial versions, and many of the core routines are open source.

Microsoft’s Blockchain service is more elaborate. It’s designed to support arbitrary digital contracts, not just store some bits. The company offers both a service to store the data and a full development platform for creating an architecture that captures your workflow. The contracts can be set up either for your internal teams or across multiple enterprises to bind companies in a consortium.

BigchainDB is built on the MongoDB NoSQL model. Any MongoDB query will work. The database will track the changes and share them with a network of nodes that will converge upon the correct value. The consensus-building algorithms can survive failed nodes and recover.

Is there anything a ledger can’t do?

Because it’s just a service for storing data, any bits that might be stored in a traditional database can be stored in a ledger database. The cost of updating the cryptographic record for each transaction, though, may not be worth it for high-volume applications that don’t need the extra assurance. Adding the extra digital signature requires more computation. That’s not a significant hurdle for low-volume tables like a bank account, where there may be only a few transactions per day and the need for accuracy and trust far outweighs the cost. But it could be prohibitive for something like a log file of high-volume activity with little need for assurance. If some fraction of a social media chat application disappeared tomorrow, the world would survive.

The biggest question is just how important it will be to trust the historical record in the future. If there’s only a slim chance that someone might want to audit the transaction journal, then the extra cost of computing the signatures or the hash values may not be worth it.

This article is part of a series on enterprise database technology trends.


Incoming White House science and technology leader on AI, diversity, and society

January 18, 2021   Big Data


Technologies like artificial intelligence and human genome editing “reveal and reflect even more about the complex and sometimes dangerous social architecture that lies beneath the scientific progress that we pursue,” Dr. Alondra Nelson said today at a televised ceremony introducing President-elect Joe Biden’s science team. On Friday, the Biden transition team appointed Nelson deputy director for science and society at the White House Office of Science and Technology Policy (OSTP). Biden will be sworn in Wednesday to officially become the 46th president of the United States.

Nelson said in the ceremony that science is a social phenomenon and a reflection of people, their relationships, and their institutions. This means it really matters who’s in the room when new technology like AI is developed, she said. This is also why for much of her career she has sought to understand the perspectives of people who are not typically included in the development of emerging technology. Connections between our scientific and social worlds have never been as urgent as they are today, she said, and there’s never been a more important moment to situate scientific development in ethical values like equality, accountability, justice, and trustworthiness.

“When we provide inputs to the algorithm; when we program the device; when we design, test, and research; we are making human choices, choices that bring our social world to bear in a new and powerful way,” she said. “As a Black woman researcher, I am keenly aware of those who are missing from these rooms. I believe we have a responsibility to work together to make sure that our science and technology reflects us, and when it does it reflects all of us, that it reflects who we truly are together. This too is a breakthrough. This too is an innovation that advances our lives.”

Nelson’s comments allude to trends of pervasive algorithmic bias and a well-documented lack of diversity among teams deploying artificial intelligence. Those trends appear to have converged when Google fired AI ethics co-lead Timnit Gebru last month. Algorithmic bias has been shown to disproportionately and negatively impact the lives of Black people in a number of ways, including use of facial recognition leading to false arrests, adverse health outcomes for millions, and unfair lending practices. A study published last month found that diversity on teams developing and deploying artificial intelligence is a key to reducing algorithmic bias.

Dr. Eric Lander will be nominated to serve as director of the OSTP and presidential science advisor. In remarks today, he called America’s greatest asset its “unrivaled diversity” and spoke of science and tech policy that creates new industries and jobs but also ensures benefits of progress are “shared broadly among all Americans.”

“Scientific progress is about someone seeing something that no one’s ever seen before because they bring a different lens, different experiences, different questions, different passions. No one can top America in that regard, but we have to ensure that everyone not only has a seat at the table, but a place at the lab bench,” he said.

Biden also spoke at the ceremony, referring to the team he has assembled as one that will help “restore America’s hope in the frontier of science” while tackling advances in health care and challenges like climate change.

“We have the most diverse population in the world that’s in a democracy, and there’s so much we can do. I can’t tell you how excited we’ve been about doing this. We saved it for last. I know it’s not naming Department of Defense or attorney general, but I tell you what: You have more impact on what our children are going to face and our grandchildren are going to have opportunities to do than anyone,” he said.

As part of today’s announcement, Biden said the presidential science advisor will be a cabinet-level position for the first time in U.S. history. Vice President-elect Kamala Harris, whose mother worked as a scientist at UC Berkeley, also spoke. She concluded her remarks with an endorsement of funding for science, technology, engineering, and mathematics (STEM) education and an acknowledgment of Dr. Kizzmekia Corbett, a Black female scientist whose contributions helped create the Moderna COVID-19 vaccine.

The Biden-Harris campaign platform has also pledged to address some forms of algorithmic bias. While the Trump administration signed a few international agreements supporting trustworthy AI, the outgoing president’s harsh immigration policy and bigoted rhetoric undercut any chance of leadership when it comes to addressing the ways algorithmic bias leads to discrimination or civil rights violations.

Earlier this week, members of the Trump administration introduced the AI Initiatives Office to guide a national AI strategy following the passage of the National Defense Authorization Act (NDAA). The AI Initiatives Office may be one of the only federal offices to depict a neural network and an eagle in its seal.
