
Tag Archives: Public

New York City Council votes to prohibit businesses from using facial recognition without public notice

December 11, 2020   Big Data


New York City Council today passed a privacy law for commercial establishments that prohibits retailers and other businesses from using facial recognition or other biometric tracking without public notice. If signed into law by NYC Mayor Bill de Blasio, the bill would also prohibit businesses from selling biometric data to third parties.

In the wake of the Black Lives Matter movement, an increasing number of cities and states have expressed concerns about facial recognition technology and its applications. Oakland and San Francisco, California, and Somerville, Massachusetts, are among the cities where law enforcement is prohibited from using facial recognition. In Illinois, companies must get consent before collecting biometric information of any kind, including face images. New York recently passed a moratorium on the use of biometric identification in schools until 2022, and lawmakers in Massachusetts have advanced a suspension of government use of any biometric surveillance system within the commonwealth. More recently, Portland, Maine approved a ballot initiative banning the use of facial recognition by police and city agencies.

The New York City Council bill, which was sponsored by Bronx Councilman Ritchie Torres, doesn’t outright ban the use of facial recognition technologies by businesses. However, it does impose restrictions on the ways brick-and-mortar locations like retailers, which might use facial recognition to prevent theft or personalize certain services, can deploy it. Businesses that fail to post a warning about collecting biometric data must pay $500. Businesses found selling data will face fines of $5,000.

In this respect, the bill falls short of Portland, Oregon’s recently passed ordinance regarding biometric data collection, which bans all private use of biometric data in places of “public accommodation,” including stores, banks, restaurants, public transit stations, homeless shelters, doctors’ offices, rental properties, retirement homes, and a variety of other types of businesses (excepting workplaces). It’s scheduled to take effect starting January 1, 2021.

“I commend the City Council for protecting New Yorkers from facial recognition and other biometric tracking. No one should have to risk being profiled by a racist algorithm just for buying milk at the neighborhood store,” Fox Cahn, executive director of the Surveillance Technology Oversight Project, said. “While this is just a first step towards comprehensively banning biometric surveillance, it’s a crucial one. We shouldn’t allow giant companies to sell our biometric data simply because we want to buy necessities. Far too many companies use biometric surveillance systems to profile customers of color, even though they are biased. If companies don’t comply with the new law, we have a simple message: ‘we’ll see you in court.’”

Numerous studies and VentureBeat’s own analyses of public benchmark data have shown facial recognition algorithms are susceptible to bias. One issue is that the data sets used to train the algorithms skew white and male. IBM found that 81% of people in the three face-image collections most widely cited in academic studies have lighter-colored skin. Academics have found that photographic technology and techniques can also favor lighter skin, including everything from sepia-tinged film to low-contrast digital cameras.

“Given the current lack of regulation and oversight of biometric identifier information, we must do all we can as a city to protect New Yorkers’ privacy and information,” said Councilman Andrew Cohen, who chairs the Committee on Consumer Affairs. Crain’s New York reports that the committee voted unanimously in favor of advancing Torres’ bill to the full council earlier this afternoon.

The algorithms are often misused in the field, as well, which tends to amplify their underlying biases. A report from Georgetown Law’s Center on Privacy and Technology details how police feed facial recognition software flawed data, including composite sketches and pictures of celebrities who share physical features with suspects. The New York Police Department and others reportedly edit photos with blur effects and 3D modelers to make them more conducive to algorithmic face searches. And police in Minnesota have been using biometric technology from vendors including Cognitec since 2018, despite a denial issued that year, according to the Star Tribune.

Amazon, IBM, and Microsoft have self-imposed moratoriums on the sale of facial recognition systems. But some vendors, like Rank One Computing and Los Angeles-based TrueFace, are aiming to fill the gap with customers, including the City of Detroit and the U.S. Air Force.


Big Data – VentureBeat


Sonrai Security raises $20 million to protect public clouds with automation

October 15, 2020   Big Data


Public cloud security provider Sonrai Security today announced a $20 million round that will be used to accelerate R&D and boost global sales and marketing for its identity and data governance products.

Roughly 83% of enterprise workloads have moved to the cloud, according to a 2020 survey from LogicMonitor. But the cloud remains vulnerable to cyberattacks. IBM found last year that the average time to identify a breach was 206 days. Meanwhile, security breaches have increased by 11% since 2018 and 67% since 2014, Accenture reported in a recent study.

Sonrai offers a platform called Sonrai Dig to help companies stay ahead of threats. The platform is built on a graph that identifies and monitors relationships between entities (e.g., admins, roles, compute instances, serverless functions, and containers) and data within public clouds and third-party data stores. An engine automates workflow, remediation, and prevention across cloud and security teams to provide baseline security. At the same time, it gives critical data in object stores like AWS S3 and Azure Blob priority for suspicious-activity and access-rights monitoring.


Sonrai’s data governance automation solution helps integrate teams via analyses, alerts, and actions that align with the way organizations use the public cloud. The platform allows customized monitoring and views for development, staging, and production workloads and an API architecture that can be integrated into a continuous integration/continuous delivery (CI/CD) process. Dig also automatically dispatches prevention and remediation bots and provides safeguards in the form of code promotion blocks to provide end-to-end security in public cloud platforms.

Sonrai CEO Brendan Hannigan, formerly a general manager of security at IBM, says improperly configured cloud interdependencies and inheritances can lead to significant security risks. These include excessive access paths to data, over-permissioned identities, and an unwieldy separation of responsibilities. Hannigan estimates that enterprises’ public cloud utilization generates hundreds of cloud accounts, thousands of data stores, and tens of thousands of ephemeral pieces of compute — complexity that legacy cloud security tools have failed to address.

Hannigan won’t disclose Sonrai’s customers, but he says the platform can scale to thousands of roles and compute instances across hundreds of corporate accounts. “The increasing frequency of cloud breaches caused by identity and data access complexity has driven significant traction for our … platform among large enterprises,” Hannigan added. “They see it as the basis of their cloud security model.”

Menlo Ventures led today’s series B round, with participation from Polaris Partners and Ten Eleven Ventures. It brings the New York-based company’s total raised to over $38.5 million, following an $18.5 million series A round in January 2019.



Big Data – VentureBeat


Amsterdam and Helsinki launch algorithm registries to bring transparency to public deployments of AI

September 28, 2020   Big Data

Amsterdam and Helsinki today launched AI registries to detail how each city government uses algorithms to deliver services, making them among the first major cities in the world to do so. An AI Register for each city was introduced in beta today as part of the Next Generation Internet Policy Summit, a virtual conference organized in part by the European Commission and the city of Amsterdam to lay out a European vision of the future of the internet. The Amsterdam registry currently features a handful of algorithms, but it will be extended to include all of the city’s algorithms as feedback is collected at the conference, according to a city official.

Each algorithm cited in the registry lists datasets used to train a model, a description of how an algorithm is used, how humans utilize the prediction, and how algorithms were assessed for potential bias or risks. The registry also provides citizens a way to give feedback on algorithms their local government uses and the name, city department, and contact information for the person responsible for the responsible deployment of a particular algorithm. A complete algorithmic registry can empower citizens and give them a way to evaluate, examine, or question governments’ applications of AI.

In a previous development in the U.S., New York City created an automated decision systems task force in 2017 to document and assess city use of algorithms. At the time, it was the first city in the U.S. to do so. However, following the release of a report last year, commissioners on the task force complained about a lack of transparency and an inability to access information about algorithms used by city government agencies.

The recently released 2020 Government AI Readiness Index ranks the Netherlands and Finland among the most prepared nations in the world to adopt applications of AI. A McKinsey Global Institute AI Readiness Index report found similar results. Government officials in many parts of the world are joining business leaders in increasing the number of use cases where AI is applied to automate tasks or make decisions, but doing so carries risk. Poor performance or algorithmic bias can lead to a loss of trust between governments and citizens, cautioned a joint Stanford-NYU study analyzing how the U.S. government deploys AI systems.

Among recent examples: A Dutch court ordered government officials to stop using the SyRI algorithm due to discrimination against immigrants and people living in low-income households. As was the case with facial recognition in the U.K. earlier this year, a judge deemed use of the algorithm a violation of human rights. There were also “Fuck the algorithm” protests in Britain this summer following an algorithmic grading scandal that gave better scores to kids from private schools and worse grades to kids from low-income households.

In a statement accompanying the announcement, Helsinki City Data project manager Pasi Rautio said the registry is also aimed at increasing public trust in the city’s use of artificial intelligence “with the greatest possible openness.”

The introduction of AI registries in Amsterdam and Helsinki is the latest multinational effort to maintain public trust and harness AI for the good of humanity. Earlier in September, Finland joined 12 other countries, including NATO member states and the U.S., to form the AI Partnership for Defense. Hosted by the U.S. Department of Defense’s Joint AI Center, the coalition will discuss how to translate ethics or policy principles into practice. That same week, EU Commission members began talks with high-level Chinese officials about issues important to the global economy including AI.

Also earlier this month: The European Laboratory for Learning and Intelligent Systems (ELLIS) launched the second wave of its pan-European initiative to accelerate AI research in dozens of cities across the EU, continuing the $220 million initiative to keep AI talent in Europe.

In another example of Finland attempting to create a more democratic approach to AI, last December government officials made AI training available with the stated goal of teaching at least 1% of EU citizens the fundamentals of AI.


Big Data – VentureBeat


Alphabet revenue dropped in Q2 2020, the first decline since going public

July 31, 2020   Big Data


(Reuters) — Google parent Alphabet’s quarterly sales fell for the first time in its 16 years as a public company, but the decline was less than expected as many advertisers stuck with the most popular online search engine during the pandemic.

Shares of Alphabet fell 1.2% to $1,518.85 after it released the second-quarter results. The stock had rebounded early Thursday to this year’s pre-pandemic high of about $1,525.

With its mostly free tools for web browsing, video watching, and teleconferencing, the Google unit has become a larger part of many consumers’ lives during the pandemic as lockdown orders forced people to rely on the internet for work and entertainment.

But advertisers on Google have suffered mass layoffs and other cutbacks during the pandemic, and marketing budgets are often the first to get slashed, especially by big clients like travel search engines, airlines, and hotels.

Google’s ads business has long trended with the broader economy, and the U.S. economy contracted at its steepest pace since the Great Depression in the second quarter, the Commerce Department said on Thursday.

Google appeared to weather the slowdown better than before, as the pandemic has made the internet more attractive to advertisers than TV, radio and other avenues.

“This quarter, we saw the early signs of stabilization, as users returned to commercial activity online,” Alphabet Chief Executive Sundar Pichai told analysts on Thursday. “Of course, the economic climate remains fragile.”

Alphabet’s overall second-quarter revenue was $38.3 billion, down 2% from the year-ago period. Analysts tracked by Refinitiv, on average, had estimated a 4% decline to $37.367 billion.

The sales decline was the first since the company went public in 2004 and the worst performance since its 2.9% growth during the Great Recession in 2009.

About 66% of Alphabet’s revenue came from Google search and YouTube ads, 12% from ads sold on partner properties online, 8% from its cloud business and 14% from its mobile app store and about a dozen other smaller businesses.

It has adjusted by slowing expense growth. Alphabet’s total costs and expenses rose about 7% from a year ago to $31.9 billion in the second quarter, compared with a 12% jump a quarter ago.

Alphabet’s quarterly profit was $6.96 billion, or $10.13 per share, compared with the analysts’ average estimate of $5.645 billion, or $8.29 per share.

New data privacy laws, including one that went into effect this month in Google’s home state of California, are also depressing ad prices.

Antitrust regulators in countries across the Americas, Europe, and Asia are weighing whether Google has stifled competition on its way to dominating search, mobile software, and other businesses, with some bodies even considering forcing it to divest parts of its ad operations.

About 2,000 employees last month petitioned Google’s emerging cloud business to scuttle deals with some police agencies, citing racial discrimination concerns. And whether a massive hiring spree will win other cloud clients is uncertain.

Investors may be shifting toward less ad-reliant rivals. Entering Thursday, Amazon and Microsoft, which have smaller ad businesses than Google but bigger cloud units, were trading at 145 times and 35 times their respective earnings over the last 12 months. Alphabet shares were at 30 times earnings over the last year.

(Reporting by Paresh Dave in Oakland, Calif. and Munsif Vengattil in Bengaluru; Editing by Shailesh Kuber and Richard Chang)


Big Data – VentureBeat


Using a Public Web Service to Consume SQL Server Build Numbers

June 18, 2020   BI News and Info

During my time as a professional DBA, I have been responsible for quite a few SQL Server instances, all with their own set of characteristics, including different versions and patching levels, the topic of this article.

One of the primary duties that DBAs have is making sure that SQL Server instances are patched and up to date as Microsoft makes the corresponding Service Packs and/or Cumulative Updates available. This responsibility, of course, applies to on-premises deployments of SQL Server. Microsoft manages most database services for you in Azure; however, you will still be responsible for keeping SQL Server on Azure Virtual Machines up to date.

With this said, how can you make sure that your set of SQL Server instances is patched up to the latest release available from Microsoft? One obvious way is to manually check the current releases of SQL Server and compare that list against what you currently have, noting the current status of your environment and pointing out any differences. However, doing this can consume a lot of time, especially if you have more than a handful of instances to check.

I wanted to know whether my instances were up to date, but I didn’t have enough time on my hands to constantly check for available updates. I previously created a solution that consumed and parsed a website to gather information about SQL Server build numbers. However, I put that solution to rest because I didn’t want to depend on the availability of an external site to keep things working for me.

After many failed attempts to find a way to automate this effort, I decided to build a public service that can help any SQL Server DBA fulfil this important duty.

Choosing the technology to build the public service

After several hours of thinking, I chose Azure (a solid decision, by the way), combining two of its “serverless” offerings to help reduce the overall costs. This article is in no way a deep dive into the technologies picked, so with that out of the way, let me explain why I picked Azure Serverless Functions and Azure SQL Database Serverless.

One of my first options was to spin up a virtual machine, install a web server and a database, point a custom domain to the public IP assigned to the virtual machine, and develop the service. However, by going this route, even if there’s no activity on the server, you still have to pay a minimum amount for the storage and virtual network assigned to your virtual machine.

With the serverless options, you get a cost-effective and very convenient solution by paying only when your resources are actually used.

Azure SQL Database Serverless

Nowadays, Microsoft offers a Database-as-a-Service tier called serverless. A convenient feature of this option is that if your database hasn’t been used for a continuous amount of time (1 hour is the minimum you can pick, up to 7 days), it will auto-pause itself and, you guessed it, you will only be charged for the storage assigned to your database. Under normal conditions, Microsoft charges for both the storage and compute resources used by your Azure SQL Database.

There is one important detail to keep in mind: if your database is in a paused state and a request hits it, the database will need some time (usually several seconds) to “wake up” and serve the request. Therefore, there might be times when the service seems slow, but it is very likely just the database “waking up”. You can find more information here.
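If you consume the service from a script, you can ride out that wake-up delay automatically by retrying the request. Below is a minimal PowerShell retry sketch; the attempt count and sleep interval are arbitrary assumptions that you should tune for your environment, and the endpoint URL is one of the service URLs described later in this article.

# Minimal retry sketch: the attempt count and delay below are assumptions,
# not recommendations. Retrying gives a paused serverless database time to resume.
$uri = "http://www.sqlserverbuilds.info/api/builds/2019/latest"
$response = $null
for ($attempt = 1; $attempt -le 3; $attempt++) {
    try {
        $response = Invoke-WebRequest -Uri $uri -UseBasicParsing -TimeoutSec 60
        if ($response.Content) { break }   # non-empty payload; the database is awake
    }
    catch {
        Write-Warning "Attempt $attempt failed: $($_.Exception.Message)"
    }
    Start-Sleep -Seconds 15   # give the database time to "wake up"
}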

Azure Serverless Functions

Azure Functions are an excellent option for quickly developing microservices without worrying about the underlying infrastructure that powers them; hence the term serverless. It doesn’t mean that no server runs behind the scenes. There are different service plans and configurations for your functions, but the convenient part for me is that there is a free monthly grant, and so far, I haven’t spent a single dime on Azure Functions.

You can find more information here.

Details and usage of the public service

Before detailing the structure and usage of the service, I would like to state one important fact: as of the time of this writing, the usage of this public service is entirely free for the end user. I am personally financing the Azure resources described (even if the cost is tiny at the moment) and will continue to do so for the foreseeable future unless something prevents me from doing so.

To consume the service, you issue an HTTP request, either through a web browser or programmatically through a script, to get a JSON response with the data. As of this writing, there is no restriction on who can consume this service; however, this could eventually change if any malicious activity is detected, such as attempts to bring the service down.

NOTE: Make sure that the machine from which you trigger the request has internet access. It might be an obvious thing, but I have seen cases where the service appeared to be failing and the cause was that extremely simple detail.

Here is the structure of the URL:

http://www.sqlserverbuilds.info/api/builds/{version:regex(2019|2017|2016|2014|2012|2008R2|2008|all)}/{kind:regex(first|latest|all)}

As you can see, there are two sections of the URL within curly brackets {}. The first will tell the service about the information that the user is actually targeting:

{ version:regex(2019 | 2017 | 2016 | 2014 | 2012 | 2008R2 | 2008 | all) }

Here you specify, as a parameter, the particular SQL Server version for which you want the released/available build numbers. If you specify all, then all the build numbers I have collected in the database are returned. I have only populated the database with build numbers starting from SQL Server 2008. I know that there are many systems out there still running SQL Server 2005 and below, but I thought that SQL Server 2008 would be a good starting point; perhaps in a future revision of this project, I might add even older versions.

The next part looks like this:

{ kind:regex(first | latest | all) }

Here you specify, as a parameter, how granular the information returned by the service should be.

First: tells the service to return only the very first build number for the specified SQL Server version.

Latest: tells the service to return only the latest build number for the specified SQL Server version.

All: tells the service to return all the build numbers found for the specified SQL Server version.

Output Examples:

Here are some examples of calls to the service and their results. Note, as stated earlier, that if you get a blank JSON object as a response or experience general slowness, the database was in a paused state and is “waking up”.

Retrieving the first build number for SQL Server 2019.

URL: http://www.sqlserverbuilds.info/api/builds/2019/first

Result: [{ "sp" : "RTM", "build_number" : "15.0.2000.5", "release_date" : "2019-11-04" }]

Retrieving the latest build number for SQL Server 2019.

URL: http://www.sqlserverbuilds.info/api/builds/2019/latest

Result: [{ "sp" : "RTM", "cu" : "CU4", "build_number" : "15.0.4033.1", "release_date" : "2020-03-31" }]

Retrieving all the build numbers for SQL Server 2019.

URL: http://www.sqlserverbuilds.info/api/builds/2019/all

Result: [
{ "sp" : "RTM", "cu" : "CU4", "build_number" : "15.0.4033.1", "release_date" : "2020-03-31" },
{ "sp" : "RTM", "cu" : "CU3", "build_number" : "15.0.4023.6", "release_date" : "2020-03-12" },
{ "sp" : "RTM", "cu" : "CU2", "build_number" : "15.0.4013.40", "release_date" : "2020-02-13" },
{ "sp" : "RTM", "cu" : "CU1", "build_number" : "15.0.4003.23", "release_date" : "2020-01-07" },
{ "sp" : "RTM", "extra" : "GDR", "build_number" : "15.0.2070.41", "release_date" : "2019-11-04" },
{ "sp" : "RTM", "build_number" : "15.0.2000.5", "release_date" : "2019-11-04" }
]

Retrieving the first build number of all the SQL Server versions stored in the database.

URL: http://www.sqlserverbuilds.info/api/builds/all/first

Result: [
{ "sp" : "RTM", "build_number" : "15.0.2000.5", "release_date" : "2019-11-04" },
{ "sp" : "RTM", "build_number" : "14.0.1000.169", "release_date" : "2017-10-02" },
{ "sp" : "RTM", "build_number" : "13.0.1601.5", "release_date" : "2016-06-01" },
{ "sp" : "RTM", "build_number" : "12.0.2000.8", "release_date" : "2014-04-01" },
{ "sp" : "RTM", "build_number" : "11.0.2100.60", "release_date" : "2012-03-06" },
{ "sp" : "RTM", "build_number" : "10.50.1600.1", "release_date" : "2010-04-21" },
{ "sp" : "RTM", "build_number" : "10.0.1600.22", "release_date" : "2008-08-07" }
]

Retrieving the latest build number of all the SQL Server versions stored in the database.

URL: http://www.sqlserverbuilds.info/api/builds/all/latest

Result: [
{ "sp" : "RTM", "cu" : "CU4", "build_number" : "15.0.4033.1", "release_date" : "2020-03-31" },
{ "sp" : "RTM", "cu" : "CU20", "build_number" : "14.0.3294.2", "release_date" : "2020-04-07" },
{ "sp" : "SP2", "cu" : "CU12", "build_number" : "13.0.5698.0", "release_date" : "2020-02-25" },
{ "sp" : "SP3", "cu" : "CU4", "extra" : "CVE", "build_number" : "12.0.6372.1", "release_date" : "2020-02-11" },
{ "sp" : "SP4", "extra" : "CVE", "build_number" : "11.0.7493.4", "release_date" : "2020-02-11" },
{ "sp" : "SP3", "extra" : "GDR", "build_number" : "10.50.6560.0", "release_date" : "2018-01-06" },
{ "sp" : "SP4", "extra" : "GDR", "build_number" : "10.0.6556.0", "release_date" : "2018-01-06" }
]

Structure of the JSON response

Once you get back the JSON response, you’ll need to interpret the information:

[
   {
      "sp" : "RTM",
      "cu" : "CU4",
      "build_number" : "15.0.4033.1",
      "release_date" : "2020-03-31"
   }
]

sp: The Service Pack level of the build number

  • From SQL Server 2017 and up, this will always be “RTM”, as Microsoft shifted to Cumulative Update-only releases.

cu: The Cumulative Update level of the build number.

  • When this field is absent from a particular response, it means that the build number is the base RTM/SP release, before any Cumulative Update.

build_number: The actual build number of the specific release.

release_date: The date when Microsoft released the specific build number to the public.

  • There are rare cases where Microsoft pulls a particular build number from public availability (due to bugs or reported errors). When I find cases like these, I usually pull them from the database as well.

extra: When this field appears in a particular response object, it means that the build number is a special case release, either a General Distribution Release, a Hotfix, or an On-Demand update.
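To make these rules concrete, here is a small PowerShell sketch (my own illustration; the Format-BuildInfo helper is a name I made up) that turns each response object into a one-line label while tolerating the optional cu and extra fields:

# Illustrative helper (hypothetical name): builds a readable label from one
# response object, handling the optional "cu" and "extra" fields.
function Format-BuildInfo {
    param([pscustomobject]$Build)
    $label = $Build.sp
    if ($Build.cu)    { $label = "$label $($Build.cu)" }
    if ($Build.extra) { $label = "$label ($($Build.extra))" }
    "{0} - {1}, released {2}" -f $Build.build_number, $label, $Build.release_date
}

# Invoke-RestMethod parses the JSON response into objects automatically
Invoke-RestMethod -Uri "http://www.sqlserverbuilds.info/api/builds/2019/all" |
    ForEach-Object { Format-BuildInfo $_ }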

Bonus script to interact with the public service

Since the spirit of this public service is to allow fellow DBAs to consume it programmatically, let me leave you with a PowerShell script that you can use as a “stepping stone” for your own particular use case.

Something that has been very helpful to me (and it might be to you as well) is using this service to store build number information in a central repository that I keep up to date and use to determine whether my instances are current. Of course, you would have to craft that solution and apply some sort of automation to it; a sketch of such a comparison appears after the output examples below.

Code

$response = Invoke-WebRequest -URI http://www.sqlserverbuilds.info/api/builds/2019/first -UseBasicParsing
$json = ConvertFrom-Json $([String]::new($response.Content))
# A response with more than one element means an option that targets
# multiple build numbers was sent
if($json.length -gt 1){
    foreach($item in $json){
        $item
    }
}
else{
    $json
}

Output Examples

Fetching one build number for a particular SQL Server version.


Fetching all the build numbers from a particular SQL Server version.


Fetching all the build numbers stored in the database.

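As promised earlier, here is a hedged sketch of the central-repository comparison idea: it asks the service for the latest SQL Server 2019 build and compares it against what one of your instances reports. It assumes the SqlServer PowerShell module is installed, and “MyServer” is a placeholder for one of your 2019 instances.

# Sketch only: "MyServer" is a placeholder, and the SqlServer module's
# Invoke-Sqlcmd cmdlet is assumed to be available (Install-Module SqlServer).
$latest = Invoke-RestMethod -Uri "http://www.sqlserverbuilds.info/api/builds/2019/latest" |
          Select-Object -First 1
$current = (Invoke-Sqlcmd -ServerInstance "MyServer" `
    -Query "SELECT CAST(SERVERPROPERTY('ProductVersion') AS nvarchar(32)) AS v").v
if ([version]$current -lt [version]$latest.build_number) {
    Write-Warning "MyServer is on $current; the latest 2019 build is $($latest.build_number)."
}
else {
    "MyServer is up to date ($current)."
}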

Conclusion

I really hope that this personal initiative proves valuable to any SQL Server DBA out there facing the same situation that I once faced. Keep in mind that you might find errors while attempting to consume the service, which wouldn’t be that surprising, as the version I’m presenting in this article is v1.0.

I will personally be updating the database with every new release that Microsoft makes public. Ideas, comments, suggestions, and complaints are always welcome in favor of improving this service in any possible way, so feel free to drop a comment and I will try my best to address it.


SQL – Simple Talk


Michael Che Of ‘Saturday Night Live’ To Pay Rent For 160 New York Public Housing Residents

April 16, 2020   Humor

“Saturday Night Live” star Michael Che has committed to paying one month’s rent for 160 residents of a New York City public housing building as a tribute to his grandmother, who died last week of coronavirus-related complications.

Che disclosed his decision on Instagram in a post that called on New York City Mayor Bill de Blasio and others to find a solution to the rent crisis that many New Yorkers are facing amid the widespread unemployment caused by the COVID-19 lockdown. Che disclosed the death of his grandmother last week when “SNL” returned to the airwaves with its first-ever remote production. Che will cover the rent for other residents in the building where she lived, to honor her memory and make a statement about the rent crisis in the city.

“It’s crazy to me that residents of public housing are still expected to pay their rent when so many New Yorkers can’t even work,” Che wrote on Instagram. Of his decision to cover rent for residents of his grandmother’s building, Che wrote: “I know that’s just a drop in the bucket. So I really hope the city has a better plan for debt forgiveness for all the people in public housing, AT THE VERY LEAST.”

Che closed his post with a “P.S.” addressed to de Blasio, New York Gov. Andrew Cuomo and hip hop mogul Sean Combs. “De Blasio! Cuomo! Diddy! Let’s fix this! Page me!”

Che did not specify the building or elaborate on the details of his grandmother’s death. NBC’s “Today” identified Che’s grandmother as Martha.

Che is co-anchor with Colin Jost of “SNL’s” “Weekend Update” segment and co-head writer of the show. He’s been a writer for “SNL” since 2013 and on air with the show since 2014. Che and Jost co-hosted the Primetime Emmy Awards telecast in 2018.

Source: Variety




The Humor Mill


The Intelligent Public Sector Organization: Three Ways Forward

March 14, 2020   SAP

Public sector organizations are experiencing tremendous change. While the core mission to protect the community, provide services, and help the economy prosper remains firmly in place, rising constituent expectations for service, convenience, and data protection are putting organizations to the test.


Much is at stake, and failure can cause real harm: unsafe or unhealthy communities, degraded citizen services or lack of services, economic or political irrelevance, and loss of power or reputation.

Keeping pace at a time when constituents expect experiences and interactions modeled on the kind they receive from digital natives is no small task. To continue to fulfill its mission, government must embrace technical, cultural, and organizational change to be more responsive, adaptable, and transparent.

The following strategic priorities for becoming an intelligent enterprise are helping governments to move forward:

Put the citizen at the center

Governments today seek ways to simplify complicated processes and provide more personalized, self-managed services for citizens across all channels. Organizations are employing intelligent technologies with conversational UIs and natural-language processing for better delivery of services. Agencies are becoming service orchestrators and information brokers capable of supporting end-to-end customer journeys across departments.

One result is greater automation that supports the personalization of core services. This frees up employees to focus on the more complex services citizens need. Another result is that organizations are beginning to deliver services proactively without compromising privacy and permission, rather than expecting the citizen to request services.

With two-way authentication, encryption, and blockchain technology, for example, organizations can enable constituents to approve the use of their personal data one time only – thus streamlining their experiences. And by collecting experience data, government agencies can also get a much clearer picture of how citizens engage with them and then make adjustments to more effectively deliver on policy mandates.

Leverage data as an asset

As governments seek to use data to inform decision-making, they must find ways to more effectively share data across their own agencies to develop an integrated picture. By breaking down silos with a consolidated view of data, organizations will be able to share data more freely – and even use it for predictive analytics and simulations. This can improve strategic planning and policy-making.

At the same time, agencies need to prioritize the protection of data – at rest or in transit – from disclosure, modification, or destruction. Key to all of this is a rationalized approach to data management that supports an organization-wide “single source of truth” that integrates inter-organizational and external data.

To this end, organizations are moving toward centralized, easy-to-use data management platforms with intuitive interfaces that simplify access to information and empower stakeholders at all levels to maximize the value of data wherever possible.

Reimagine business processes and models

Government agencies are now moving to redefine their core processes and service delivery models – modernizing legacy systems and laying a digital foundation for data-driven decision-making. Everyday tasks are being automated, enabling employees to focus on the cases that require human engagement.

Some examples include the automation of constituent-facing services (such as social services, call centers, and automobile licensing and registration requests) and internal processes (such as invoice approvals and payment-matching).

Automation, invariably, involves the generation of data – which can then be used by machine learning algorithms to identify patterns. For example, agencies could use such information to optimize tax collection strategies based on individuals’ payment history.

Moving forward, agencies could even establish networks of government and non-government stakeholders that use decentralized ledger technologies such as blockchain to power further process simplification. Gathering data from multiple sources and using advanced analytics will enable the government to intelligently deliver the right services to the right people at the right time.

The future is bright

Increasingly, the most successful public agencies will be those that reimagine their end-to-end business processes using the tools and technologies now widely available. These agencies will shift routine tasks from humans to business systems using intelligent automation to serve citizens better than ever before.

Join our virtual event to learn more

Want to find out more about the latest trends and innovations for today’s public sector organizations? Register today to join our flagship 60-minute virtual industry event:

SAP Industries Forum 2020: Public Sector
Experience the Intelligent Enterprise
Thursday, March 26, 2020
8:00 a.m. PDT, 11:00 a.m. EDT, 4:00 p.m. CET

Register now


Digitalist Magazine


Public comments show lingering problems with California’s data privacy law

December 19, 2019   Big Data

Earlier this month, the California Office of the Attorney General (CAG) held hearings across four cities where the public could offer comments and feedback to lawmakers as part of the rulemaking process for the California Consumer Privacy Act (CCPA). The hearings drew speakers from across a variety of industries, and their oral comments, as well as other written comments sent to the CAG’s office by Friday, December 6, are now available on the California Attorney General’s CCPA page.

While the hearings drew a number of concerns about the new data privacy law, which goes into effect January 1, four core issues emerged.

1. Crucial CCPA terms aren’t clearly defined

The most prominent concern that came out of the hearings was that terms central to the CCPA are unclear, making it difficult for companies to feel fully confident they are in compliance. At the San Francisco hearing alone, speakers said the definitions of personal information (PI) and service provider are unclear, as is what constitutes a sell. Speakers at the Los Angeles hearing made similar comments, adding that other terms like “business,” “reasonable security measures,” and “secure” transmissions of personal information were also unclear.

A common refrain was that the CCPA’s language was too vague or broad and overreaching. As a consequence, organizations have found key sections of the CCPA difficult to operationalize. They worry that the ambiguity of these terms could result in significant unintended consequences. For example, some argued that the broad definitions of PI and business may extend the reach of the CCPA to businesses that the AG likely had no intention of regulating, like small operations that serve fewer than 50,000 California customers but run high-traffic websites using cookies.

2. It’s unclear how the CCPA’s scope affects other industry-specific regulation

Several commentators expressed confusion over the CCPA’s scope as it applies to companies that are already subject to industry-specific privacy legislation. At the San Francisco hearing, one speaker, representing a San Francisco credit union, indicated that the Gramm-Leach-Bliley Act (GLBA) and California Financial Information Privacy Act have definitions of PI that differ from the CCPA. She noted, though, that while the CCPA spells out exemptions to PI collected under the GLBA, inconsistencies in the definition of PI between laws have resulted in multiple interpretations about how the CCPA applies to data credit unions collect. Similar confusion may surround other regulations like HIPAA. At the Sacramento hearing, a speaker asked for clarification on how de-identification under the CCPA differs from de-identification under HIPAA, and how any de-identified data exempt from HIPAA should be handled by the CCPA.

3. Smaller organizations will have trouble meeting the January 1 deadline

Given the extensive scope of the CCPA, it’s no surprise that small and medium businesses have expressed concerns about the law’s reach and implications. Some organizations have said publicly that they’ll have substantial difficulty meeting the January 1 compliance deadline. At the San Francisco hearing, two speakers requested the compliance deadline be moved to 2022 to ensure their organizations could build a robust compliance program.

4. The system for data requests could be open to abuse

Speakers at the Los Angeles and San Francisco hearings also raised concerns about the potential for abuse with the request system. For example, they said that if companies were required to take unverified opt-out requests seriously, it could invite mass bot attacks by bad actors, either online or by phone. It’s been argued elsewhere that such abuse could effectively result in data request “denial of service” style attacks against organizations as their staff and infrastructure become tied up in an effort to respond to an unanticipated flood of fake requests. While tools exist to help automate data discovery and responses to data requests, some speakers argued that a “reasonable degree of certainty” should be the standard applied to requests, as that would give businesses more bandwidth to handle the issue.

What happens now?

Now that the hearings and the public comment period have passed, the CAG may use comments to revise the current draft regulations, after which the public will have 15 days (or longer) to provide comments on the revisions. So even though the CCPA goes into effect January 1, 2020, organizations should still expect changes to the law. Stakeholders should follow the rule-making process closely while making sure to submit any concerns to the CAG during the next comment period. Enforcement of the finalized law will begin July 1, 2020; however, organizations must make good faith efforts to comply starting January 1, 2020 and can be held liable for breaches of the law after this date.

Michael Osakwe is a tech writer and Content Marketing Manager at Nightfall AI.


Big Data – VentureBeat


Heads up: The Publish to web default is changing and it affects who can create public embed codes

December 18, 2019   Self-Service BI

Power BI publish to web enables users to quickly embed a report in a public website. It is not intended for users to share confidential or proprietary information. In the coming weeks we will roll out a change to our default settings that requires Power BI admins to allow public embedding before end users can create new embed codes. The change will not affect existing embed codes, which will keep working as they have been. We’re sharing this information now so you can understand the coming changes before they reach your organization.

This blog post describes functionality that has not yet been released. We expect the change to reach commercial cloud tenants the week of 1/6/2020 and government cloud tenants the week of 1/13/2020.

What is changing

By default, users will need to ask a Power BI admin to allow them to create a new publish to web embed code.

When a user attempts to create a new publish to web embed code through either the new look or conventional UI …

[Screenshot: the publish to web entry points in the UI]

… they will see the following dialog box.

[Screenshot: the dialog box directing users to contact their Power BI admin]

If an embed code was previously created for the report, it will be shown as before.

Power BI admins will see a new option within the Publish to web tenant setting to Choose how embed codes work.

[Screenshot: the Publish to web tenant setting in the admin portal]

By default, it allows existing embed codes to work, which ensures no public web pages that host reports will be affected as we roll out this change. Users trying to create new embed codes will get the experience described above. The Power BI admin can change the value to Allow existing and new codes if they want people in their tenant to be able to create new embed codes.

It may also be helpful to establish a process for your organization to ensure your users know when it’s appropriate to use the Publish to web capability. You can even govern this process by selecting the Specific security groups option.

It is also valuable to periodically review reports published publicly from your organization. The Embed Codes section of the Power BI admin portal shows reports published from your organization. The list shows embed codes created by all users and in all workspaces. You can export the list to run a review, view a report to check what it contains, or delete an embed code. Be mindful that when you delete an embed code, any users relying on it will lose access, and it can’t be restored.

[Screenshot: the embed codes list in the Power BI admin portal]

Why we are making this change

For new organizations just starting out with Power BI, and for existing ones relying on it more than ever, we want to provide a default experience that empowers admins to choose how our public report functionality is used in their organization.  At the same time, we want to ensure that existing embed codes continue to work as we make the change to our defaults.

Organizations can already prevent creation of publish to web embed codes, choose which users to trust to use the feature correctly, block certain groups of users from publishing reports publicly, and review and manage existing embed codes. The publish to web setting  and the embed codes list available in the Power BI admin portal enable Power BI admins to perform these tasks.

How to prepare for this change

As a Power BI admin, familiarize yourself with the existing publish to web controls, and review existing publish to web embed codes. These are described in more details in other parts of this blog post.

As a Power BI user, you’ll need a Power BI admin to take action to ensure you can keep publishing reports to the web. If you plan to create a new embed code after the roll-out date, coordinate with your Power BI admin so you can create your embed code in a timely manner. If you’re not sure who your Power BI admin is, ask your internal IT team or internal support organization.

There are better options available for sharing reports internally

Power BI offers several capabilities that ensure authentication and permissions are enforced before displaying content to users.

You can share a report or dashboard with a user or group of users. This sends them an email with the link to the item you shared and gives the specific permission you choose. This is the preferred way for business users to share items, as it also appears in Shared with me and the Power BI mobile apps, making it easy for users to access and later find the items you sent them.

If you are sharing with external users, Power BI integrates with Azure Active Directory B2B to ensure the sharing is more secure and governable.

The embed in SharePoint Online option allows you to build modern pages with your content using the Power BI web part for SharePoint Online.

Our embed in website or portal capability makes it easy to add your report to your internal web pages. It handles the authentication and authorization so that only those users who should see the data can do so. It’s as easy to use as publish to web.

[Screenshot: the Embed in website or portal option]

If you’re a developer looking to integrate Power BI into a custom built application, you can use  Power BI’s rich JavaScript and REST APIs to embed your content.

A summary of the methods to share content with others is available in our documentation.

What best practices should my organization adopt when using Publish to web?

As a Power BI admin, it is important to put in place a review process for embed codes. The embed codes list in the Power BI admin portal allows you to review and, if needed, delete any embed codes.

One challenge users may face is knowing how to contact their Power BI admin. We suggest you publish “Get Help” information using the options available in the Power BI admin portal.  When the option is set, the Get Help link available under the “?” icon will direct users to a support page you provide. On that page, you can inform users how they can request to use the Publish to web feature.

[Screenshot: the Get Help link under the “?” icon]

It’s also important to train your users how and when to use Publish to web. Helping your users understand which kinds of data are sensitive or confidential, and how to recognize personally identifiable information that should not be shared publicly, will help your organization work more effectively with data.

Next steps

We know this change will impact some users. We hope that by sharing this information early, you’ll be able to prepare for the change.

As always, we’re keen to hear your feedback in the comments below and new feature suggestions at https://ideas.powerbi.com.


Microsoft Power BI Blog | Microsoft Power BI


Announcing: Upgrade your classic workspaces to the new workspace experience public preview is rolling out

November 22, 2019   Self-Service BI

The Power BI new workspace experience has been generally available since April 2019. It offers improved security management, more flexible workspace roles, and new capabilities like a contact list to help you support your teams. Using the new workspace experience also enables several major features like shared and certified datasets and large models, to name a few.

Today, we enabled workspace upgrade for all commercial cloud customers as an opt-in Public Preview. The roll-out may still be proceeding, so if you don’t see it just yet, don’t worry: it’s coming. For national cloud customers, we’re expecting to enable it in your tenants in early January 2020.

Below you can see how to find the upgrade option in the workspace settings pane if you’re a workspace Admin.

 

How to prepare for the upgrade 

  1. Learn about what to expect during an upgrade 
  2. Learn about the new workspace experience 

What should I do after upgrading?

You should do several things after you upgrade. It’s best to plan them before you upgrade.

Who can upgrade a workspace and how does it work?  

Workspace admins will be able to upgrade workspaces they own. Power BI admins can see which workspaces are classic workspaces by using the workspaces list in the Power BI admin portal. They can then work with workspace admins to upgrade the desired workspaces. 

Watch the video below to see the upgrade in action. Scroll forward to 17 minutes and 26 seconds to the workspace upgrade section. 

[Video: workspace upgrade walkthrough]

What is the user experience after an upgrade? 

For end users, the upgrade keeps things as they were before you upgraded the workspace. The URLs and item identifiers of the workspace, the items it contains, and the app you published from it don’t change. Features like data refresh, subscriptions, and so forth keep working as they did before the upgrade.

Additionally, upgrading your Power BI workspace doesn’t change or remove the Office 365 Group. Any of the Teams, SharePoint Sites, OneNote notebooks, or other things tied to the Office 365 Groups continue to exist and work as they did before.   

Your end users will need to refresh their web browsers after the upgrade happens, so you may want to upgrade at off-hours. 

There are important differences if the workspace has installed or published content packs; these are described below at a high level and more extensively in the documentation.

 

What are the biggest changes to be aware of after upgrade? 

Firstly, your workspace in Power BI isn’t tied to the Office 365 Group anymore. That means that if you delete the Office 365 Group, the workspace won’t be deleted. Also, if you delete the workspace, the Office 365 Group won’t be deleted. 

The workspace access list and roles assigned to users are updated.  You’ll want to review these after upgrade to make sure you understand the changes. The Owners of the Office 365 Group are added as Admins of the workspace. The members of the Office 365 Group are not added individually to the workspace access list. Instead, the Office 365 Group is added with the appropriate workspace role.  

In most cases, the Office 365 Group will be given the Member role. However, if your workspace is ‘read-only’ then they’ll be given the Viewer role.  

You will want to review the permissions after upgrade section in the documentation to familiarize yourself with the details.  

 

What happens if I’m still using content packs? 

The new workspace experience doesn’t support content packs. Instead, use apps and shared datasets to distribute content in the workspace. We recommend removing published or installed content packs from the workspace prior to upgrade.  

However, if there are published or installed content packs when you upgrade, the upgrade process attempts to preserve the content. The top thing you should know is that we copy the content so it’s still available to users, but it won’t update any more. Because we copy the content, URLs to items in a content pack will change, so you’ll need to distribute the new URLs to anyone you shared those items with. There is no way to restore the content pack or the association of content to the content pack after you upgrade. If you are still using content packs, read through the documentation, which covers what happens in detail.

Can I move my organization entirely off classic workspaces? 

The public preview allows upgrading individual workspaces. It doesn’t provide tools for bulk upgrade. New Office 365 Groups will continue to be listed as workspaces in Power BI. For organizations wanting to move all their workspaces, the documentation offers some suggested approaches for doing so.

Can I go back to a classic workspace after the upgrade?

For 30 days after the upgrade, you can use the go back to classic workspace option to return to an Office 365 Group-based workspace. When you do this, some aspects of your classic workspace won’t be restored, like content packs. You’ll want to learn more about going back to a classic workspace, since there are a number of cases in which you won’t be able to go back.

What is the Viewer role and how can I use it?

The Viewer role allows a user to view all the reports, dashboards, and Excel workbooks in the workspace. The role makes it easier for teams to share with colleagues who need to see all the content in the workspace, but don’t need to edit or add new content.

The users see the workspace in their workspaces list. If you assign the workspace to a Power BI Premium capacity, users with the Viewer role do not need a Power BI Pro license to view the content in the workspace. Some teams may find it easier to just give their colleagues the Viewer role than to publish and keep an app updated.

Users with the Viewer role can export summarized data from reports and datasets they have access to. This is because, under the covers, they are given the Read permission on the reports and datasets in the workspace. However, to export underlying data or to use Analyze in Excel, you’ll need to give them the Build permission on the datasets you want them to use. If you expect your users will need that, it’s likely easier to publish an app, since this gives them Build permission on all the desired datasets without you managing permissions on each dataset manually.

It’s well worth familiarizing yourself with the details of the new workspace roles.

How can I tell the two types of workspaces apart?

In the workspaces list, if you expand the “…” menu, you’ll see the difference in available options. New workspace experience workspaces show options for Workspace Settings and Workspace Access. If you configured a workspace OneDrive, you’ll see the Files option too. Classic workspaces show different options, including Calendar, Members, Conversations, and a few others.

When you open the workspace’s content list, you’ll see that new workspaces have new options for Access and Settings in the top actions bar.

We’ve tried to keep the experiences consistent across workspace types, so most users won’t notice there was a change, and to minimize retraining costs.

Next Steps 

Learn more about upgrading classic workspaces in Power BI 

Learn more about the new workspace experience  

 

 

 

 


Microsoft Power BI Blog | Microsoft Power BI
