
Tag Archives: Means

AI Weekly: What Andy Jassy’s ascension to CEO means for Amazon’s AI initiatives

February 6, 2021   Big Data

This week, Jeff Bezos announced that he’ll step down as CEO at Amazon and transition to an executive chair role during the third quarter of this year. Amazon Web Services (AWS) CEO Andy Jassy will take his place, heading up a company currently valued at around $1.6 trillion.

Jassy, who joined Amazon in 1997 and has led AWS since its inception in 2003, believes Amazon’s decision to double down on AI early differentiated it from the competition. In May 2020, Gartner ranked AWS the industry leader in terms of vision and ability to execute on AI developer services. Beyond AWS, AI powers Amazon’s product recommendations, as well as Alexa, Prime Air, Amazon Go, and the pick paths used in distribution centers to find products and fulfill orders.

So how might Jassy’s elevation to CEO impact Amazon’s AI initiatives? Interviews in recent years suggest Jassy is enthusiastic about cloud services tailored to the needs of machine learning practitioners, particularly for large enterprise applications. Controversially, Jassy has also said customers, not Amazon itself, are responsible for curbing their usage of potentially problematic AI technologies like facial recognition.

In a conversation with Silicon Angle in December, Jassy said he expects the majority of applications to be infused with AI in the next five to 10 years. While he endorses the idea of catering to expert machine learning practitioners who know how to train, tune, and deploy AI models, he asserts that AWS, more than rivals like Google Cloud Platform and Microsoft Azure, has aimed to “democratize” data science by lowering the barriers to entry.

“There just aren’t that many expert machine learning practitioners. And so it never gets extensive in most enterprises if you don’t make it easier for everyday developers and data scientists to use machine learning,” Jassy told Silicon Angle. He stressed the importance of “top-layer” AI services that transcribe audio, translate text, and more via APIs, without requiring customers to develop custom models. But he said the most important thing Amazon has done to make AI more accessible is building fully managed services.

“Enterprises have so much data that they want to use predictive algorithms to get value added,” Jassy said during a keynote at the Goldman Sachs Technology and Internet Conference in San Francisco last February.

SageMaker is one example of these fully managed services. Launched in 2017, it’s designed to let developers build, test, and maintain AI models from a single dashboard. Amazon says SageMaker, which gained nine new capabilities in December — following the launch of SageMaker Studio, an integrated development environment for machine learning — now has tens of thousands of customers.

It’s a safe bet that investments in services akin and complementary to SageMaker will accelerate with Jassy at the helm. So too, most likely, will the buildout of backend tools Amazon uses to solve challenges like call analytics.

“As Clay Christensen, author of The Innovator’s Dilemma, said … people hire products and services to do a job. They don’t really care what you do under the covers, but they’re hiring a product to do a job,” Jassy told Silicon Angle. “[Some people] don’t actually want to hire machine learning [experts]. They want to have an easier way to get automatic call analytics on all of their calls … And what we’re finding is that increasingly we’re using machine learning as the source to get those jobs done, but without people necessarily knowing that that’s what we’re doing behind the scenes.”

This work might be of a controversial nature. In an interview at Recode’s 2019 Code Conference, Jassy defended the company’s facial recognition service, Rekognition, while calling for the federal government to introduce national guidelines. (In September 2019, Recode reported that Amazon was writing its own facial recognition laws to pitch to lawmakers.) “Just because tech could be misused doesn’t mean we should ban it and condemn it,” he said, adding that Amazon would provide its facial recognition tech to governments, excepting those that violate the law or infringe on civil liberties.

Last year, Amazon declared a halt on the sale of facial recognition to police departments for 12 months but did not necessarily extend that restriction to federal law enforcement agencies. Prior to the moratorium, the company reportedly attempted to sell its facial recognition tech to U.S. Immigration and Customs Enforcement (ICE), and police in Orlando, Florida and other cities have trialed it.

A number of academics have called Jassy’s stance on facial recognition technology, which runs counter to that of many Amazon shareholders, problematic at best. Anima Anandkumar, a former principal scientist for artificial intelligence at Amazon, told PBS Frontline that facial recognition isn’t “battle-tested” to work in the types of challenging conditions where law enforcement might use it (e.g., with low-light, grainy, or low-quality images). And dating back to 2018, AI researchers Joy Buolamwini, Timnit Gebru, and Deborah Raji have found that facial recognition software from companies like Amazon works best for white men and worst for women with dark skin. Amazon has publicly dismissed their coauthored work, the Gender Shades project.

Given this history, it seems unlikely that Jassy will extend the moratorium on facial recognition sales when it expires in July. He’s also unlikely to curtail the law enforcement relationships that Ring, Amazon’s smart home division, has fostered since its acquisition by Amazon in 2018. Ring has reportedly partnered with over 2,000 police and fire departments across the U.S. dating back to 2015, when Ring let the Los Angeles Police Department test how front-door footage might reduce property crimes.

Advocacy groups like Fight for the Future and the Electronic Frontier Foundation have accused Ring of using its cameras and Neighbors app (which delivers safety alerts) to build a private surveillance network via these partnerships. The Electronic Frontier Foundation in particular has singled Ring out for marketing strategies that foster fear and promote a sale-spurring “vicious cycle,” and for “[facilitating] reporting of so-called ‘suspicious’ behavior that really amounts to racial profiling.”

“We don’t have a large number of police departments that are using our facial recognition technology, and as I said, we’ve never received any complaints of misuse. Let’s see if somehow they abuse the technology — they haven’t done that,” Jassy told PBS Frontline in a 2020 interview. “And to assume they’re going to do that and therefore you shouldn’t allow them to have access to the most sophisticated technology out there doesn’t feel like the right balance to me.”

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

VentureBeat


Big Data – VentureBeat


What it Means to be a Trusted Microsoft Partner

October 28, 2020   Microsoft Dynamics CRM

At PowerObjects, an HCL Technologies Company, we routinely refer to ourselves as an award-winning Microsoft Partner. What not everyone may recognize is that this partnership we advertise can reveal itself through several different types of relationships. The most common scenario is that a business decides to implement Dynamics 365 and/or Power Platform – and they need help getting it up and…



PowerObjects – Bringing Focus to Dynamics CRM


AI Weekly: Detroit’s facial recognition battle is about the ends justifying the means

August 24, 2019   Big Data

This week, after U.S. Rep. Rashida Tlaib (D-MI) took to Twitter to criticize the Detroit Police Department’s use of facial recognition software, the force invited her to visit its Real Time Crime Center so she could see the technology in action. Rep. Tlaib readily agreed to a visit (how could she not), and we’ll presumably get to enjoy Act II of some choice political theater if and when she actually makes such a visit.

Rep. Tlaib’s remarks followed presidential candidate Sen. Bernie Sanders’ (I-VT) proclamation this week that he would ban all use of facial recognition technology in policing as part of his criminal justice reform plan. Tlaib had already planted a flag for herself opposing some facial recognition with the bill she cosponsored with Representatives Yvette Clarke (D-NY) and Ayanna Pressley (D-MA). Called the “No Biometric Barriers to Housing Act of 2019,” it prohibits “the use of biometric recognition technology in certain federally assisted dwelling units, and for other purposes.”

Her public critique of the Detroit Police Department signals that she’s taking on a new front in the battle. But the brief, three-tweet exchange spoke to larger issues around the use of facial recognition technology within police work. And the department’s response to Rep. Tlaib’s gauntlet toss was telling, demonstrating that its position on facial recognition is that the ends justify the means.

If there was any doubt, after Rep. Tlaib moved on from the Twitter thread, the Detroit Police Department tweeted the following quote: “Abolishing the use of Facial Recognition protects only one group of individuals — VIOLENT CRIMINALS,” said Police Chief James Craig. (Capitalization theirs.)

Then Chief Craig started talking to local TV reporters about how effective his department’s high-tech policing efforts are. Three clips made it onto the department’s Twitter feed. Craig looks to be in full defense mode, standing in the Real Time Crime Center, home of the police department’s Project Greenlight, which incorporates real-time video from all over the city. He says what’s missing from discussions about the technology are the victims of crimes. Then he goes into detail about how responsible the department has been, how it’s taken great care to ensure that Project Greenlight and its facial recognition efforts are constitutional, how it has shown the Center to community groups representing many demographics, how it’s taken into consideration all the well-documented problems with misidentification, and how the police don’t actually perform the facial recognition in real time (it’s applied after video capture only if necessary, he said).

He also insisted that this real-time video is not surveillance, a note the mayor of Detroit, Mike Duggan, sounded in an earlier post. Mayor Duggan said that the facial recognition capability is entirely separate from Project Greenlight, which was developed to help keep an eye on traffic intersections without identifying faces. (A Georgetown study draws a different conclusion.)

You can see from Craig’s perspective that the department doesn’t live in a theoretical world of ethical maybes; they work every day in real and gritty environments, trying to reduce crime in a city with a stained reputation. Creating a fully operational policing tool in the form of the Real Time Crime Center, and adding the capability to use facial recognition software when applicable, must have taken monumental effort and political skill.

It doesn’t take any special technology to read the exasperated weariness on Craig’s face as he talks to reporters. In his mind, he’s done everything the right way. He’s not hiding anything.

But Craig has missed the point. Rep. Tlaib knows plenty about facial recognition technology, and what she knows is enough for her to call for its ban.

Hers is not the only voice decrying facial recognition in policing and elsewhere. There’s an increasing appetite for more regulation around the technology, and it’s already verboten in San Francisco; Somerville, Massachusetts; and Oakland, California. Many academics and researchers are loudly unequivocal about their objections to its existence. There has even been bipartisan support in Congress for legislation around the technology.

The ethical and practical problems involved in facial recognition technology have been exhaustively discussed, debated, and explored, including in this publication. Flaws in the technology, such as people with darker skin being identified with significantly less accuracy than people with lighter skin, are appalling and serve to reinforce existing biases and inequality in policing.

So the hometown beef between Tlaib and the Detroit Police Department is a microcosm of a much larger international debate about facial recognition. While improvements to the technology could theoretically address its bias, larger ethical issues, such as police abusing the technology to make arrests, control borders, or target specific groups of people, are not going away soon.


Big Data – VentureBeat


Trustpilot: Building AI for your company means not falling in love

July 28, 2019   Big Data

Trustpilot, an online review platform, faced a problem that most companies are dealing with these days: how to build AI to meet business goals or solve problems — from scratch. There’s no one-size-fits-all answer, of course; it depends entirely on what you need to solve, whether you build or bolt on a solution, and so on.

Size is another concern. A giant like Microsoft can snap up Semantic Machines and drastically improve its smart assistant capabilities rapidly — almost a case of the rich getting richer. But it’s more difficult if you’re not a tech giant. Trustpilot is by no means tiny, with seven offices and some 700 employees globally, but it’s also not a large enterprise company. Although it’s fresh off a $55 million funding round, its resources aren’t infinite. And unlike the Microsofts of the world, it’s not a company that makes technology. It provides a service using technology, and as technology evolves alongside changing customer wants and business needs, companies like Trustpilot have to hustle to keep up.

The idea of taking your business from one that’s devoid of AI to one that depends on AI can be daunting. Having traversed this very journey, Ramin Vatanparast, Trustpilot’s chief product officer, had some thoughts on the subject and examples of what Trustpilot has done, which he shared in a presentation and fireside chat at Transform 2019.

Solve a problem, don’t fall in love with the technology

It’s all too easy to become enamored of AI. It’s the latest thing right now, and its potential is shiny, vast, and seemingly limitless. For businesses, that allure may be coupled with a nagging need to keep up with the Joneses, too. But Vatanparast said that Trustpilot consciously avoided falling in love with any particular technologies and instead focused on the outcomes the company wanted to achieve.

In fact, Trustpilot came to AI out of necessity rather than desire. As the company grew over the past 12 years, so did the amount of data it had to deal with. Vatanparast said that Trustpilot snags an online review every two seconds and boasts more than 3 billion impressions on those reviews each month. He expects that by the end of 2020, Trustpilot will clear 100 million total reviews. Trustpilot currently has 560TB of data. Of that, 55GB is unstructured or semi-structured data; 30TB is “cleaned, processed, or tagged”; and 17TB is in a data warehouse and available to customers to generate insights.

The company’s goal was to create the most trusted online review platform. Solving its data problems stood in the way of that goal, and Trustpilot determined that the only way it was going to be able to handle that data was by using AI.

In embarking on the AI journey, Trustpilot tried to avoid pitfalls. “Most of the companies are jumping on the revenue-based solution. So they’re looking to apply AI where there’s a possibility of generating revenue,” said Vatanparast. But Trustpilot focused more on using AI to improve its core product — online reviews — and ensuring that those reviews were trustworthy.

Cultural and structural shifts

With that vision in mind, Trustpilot started down the path to AI slowly and deliberately. “First, we created a layer on the company foundation,” said Vatanparast. That began with creating an “AI culture” within the company, which required educating the staff. They brought in a data scientist to work more closely with other teams, helping to demystify AI internally. Individuals and teams were encouraged to learn more about AI and try out ideas without the threat of consequences. “It’s OK to fail in the beginning, as we are in the beginning of the journey,” he said.

Eventually, experimentation gave way to more practical considerations and the need to actually make something useful. As Vatanparast noted, if the AI you’re working with isn’t aimed at meeting specific goals or solving problems the company is trying to address, you’re going to have problems getting it into production. But resources were a problem — although Vatanparast spun that as a positive: “We have limited resources, which sometimes is good. It helps you to focus more.” He added, “We tried to avoid [building] a laser cutter if we needed a knife.”

Into the weeds

For Trustpilot, there were two key areas that needed the artificial intelligence treatment: detecting and removing fake or spammy reviews, and providing better insights from review data.

The answer to the first issue is the Trustpilot Fraud Engine. Trustpilot already had a human team inspecting its reviews for fakes and spam, along with crowdsourced moderation by consumers and companies. That resulted in more than 5,000 notifications per month, but the volume was such that the company wasn’t able to scale it.

So Trustpilot built the Fraud Engine, using technologies and techniques including supervised and unsupervised ML, predictive analysis, statistical outlier detection, graph analysis, and neural networks. The team had to create “fake score” parameters; if a given review hits a certain threshold, it is automatically removed and a human reviewer is notified of the action. Now, 81% of the fake reviews and spam posts on Trustpilot’s platform are caught by AI.
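
Vatanparast didn’t share implementation details, but the thresholding logic described above can be sketched in a few lines of Python. Everything here (the cutoff value, the function names) is a hypothetical stand-in rather than Trustpilot’s actual system:

    # Hypothetical sketch of threshold-based moderation: a model emits a
    # "fake score" in [0, 1]; reviews above a cutoff are removed automatically
    # and a human reviewer is told about the action. The 0.9 cutoff is invented.
    AUTO_REMOVE_THRESHOLD = 0.9

    def remove_review(review_id):
        print(f"review {review_id}: removed automatically")

    def notify_moderator(review_id, score):
        print(f"review {review_id}: moderator notified (fake score {score:.2f})")

    def moderate(review_id, fake_score):
        if fake_score >= AUTO_REMOVE_THRESHOLD:
            remove_review(review_id)
            notify_moderator(review_id, fake_score)

    moderate("r-1001", 0.97)  # removed, moderator informed
    moderate("r-1002", 0.35)  # left in place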

The second thing Trustpilot needed to handle via AI was its Review Insights. “The problem was that successful companies who are working with Trustpilot … are getting around 1,000-10,000 reviews per month,” explained Vatanparast. He said that has become a challenge for companies that rely on the virtuous cycle of consumer feedback to help them improve their products. Facing those thousands of comments, they’ve struggled to decipher which ones are useful, which ones they need to reply to, and how to apply those findings to improving their products and services.

Put another way, Trustpilot had a hole in its platform. The data was there and available, but it wasn’t helpful. So the company built a sentiment classification model that detects positive and negative feedback in reviews. The system can understand from the reviews how consumers feel about a product or service, and businesses can then make changes and track whether people respond positively to them.
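
The article doesn’t describe the model’s internals, so as a rough illustration of the idea only, here is a toy sentiment classifier over review text built with scikit-learn (the training examples are invented):

    # Toy sentiment classifier over review text; illustrative only, not
    # Trustpilot's production model. Requires scikit-learn.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    reviews = [
        "fast delivery and great support",
        "arrived broken and nobody answered my emails",
        "happy with everything, will buy again",
        "poor quality, felt cheap and flimsy",
    ]
    labels = ["positive", "negative", "positive", "negative"]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(reviews, labels)
    print(model.predict(["quick shipping, very happy"]))  # likely ['positive']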

According to Vatanparast, the sentiment part is crucial, because even within a five-star review — which, going by stars alone, would appear to be a perfect review — there can be negative sentiments expressed. It’s essentially, “I’m happy about everything, but — ,” he said. And that feedback, from a loyal or mostly satisfied customer, is more instructive “negative” feedback than anything you can glean from a one-star review, where the customer is just upset and may be grousing.

Equipped with the sentiment classification model, Trustpilot has been able to analyze 35 million reviews and detect 85 million “sentiments.”

And then there’s the ethics

You can’t escape the issue of ethics in AI, even in places like Trustpilot’s review system where it doesn’t seem obvious that it would matter. Recalling some of the ethical challenges Trustpilot faced, Vatanparast said, “One example is around how you build a model and how accurate the model is. How do you treat the data and go after behaviors?”

The AI may flag 1,000 reviews on Trustpilot’s platform as fake or spammy, but 10 of those might actually be trustworthy. Is it better to keep the fraud detection model tight and accept that 1% false positive rate in order to remove those 990 offenders? Or should you loosen the model to avoid any false positives but then let more fake reviews slip through? Vatanparast wasn’t explicit about where Trustpilot falls on that particular question, but he did say that the company is constantly asking those sorts of questions and adjusting accordingly. But it’s a balance.
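
Working through the hypothetical numbers above makes the trade-off concrete:

    # The article's hypothetical: 1,000 reviews flagged, 10 actually trustworthy.
    flagged = 1000
    false_positives = 10
    true_positives = flagged - false_positives   # 990 genuine fakes removed
    print(false_positives / flagged)             # 0.01, the 1% false positive rate
    print(true_positives / flagged)              # 0.99 precision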

That also raises the issue of transparency. As Trustpilot is honing its models, it needs to show the math, as it were. If the process is constantly shifting, however slightly, then the results will change. For the purpose of maintaining trust with companies and customers using the Trustpilot platform, that cannot be a total black box. Again, though, there’s a balance: How much information is too much to share?

The next challenge

There is virtually endless room for improvement with an AI you deploy in your business. For Trustpilot, the next hurdle has to do with improving its language model.

In the AI field, there’s much fervent discussion about the need for clean data to train models. It’s a classic garbage in, garbage out situation. Ironically, Trustpilot needs a language model that works on junky data. “You could use Wikipedia to generate a language model,” he said, noting that such a model would have mostly correct grammar, usage, and spelling. “But you can’t apply that model to the [Trustpilot] reviews, because the reviews are not clean data. They’re not structured. They definitely in many cases don’t have the right spelling.”

He used the word “cheap” as an example. Used in different ways, “cheap” can mean “inexpensive,” which is a positive sentiment. But it can also mean “poor quality,” which is negative. Thus, out of context, the word “cheap” is useless, or at least significantly problematic, for creating any kind of reliable sentiment measure.

The English language is full of those sorts of oddities and quirks, so the task there is significant enough, but then there are numerous other languages to deal with in the same way. Trustpilot collects reviews from users all over the world in many different languages. “Creating models around the English language is definitely much easier, but when you look to other languages, it becomes tougher,” Vatanparast said.

Trustpilot has reached out to the IBM Watson team for help on Nordic languages, hoping to hand over its “messy” (and anonymized) data to improve Watson’s language models, and then ideally Trustpilot can use the updated models to be more accurate in its own system. It’s a process that ostensibly benefits both parties, and Trustpilot hopes to repeat the process with other organizations to keep improving the AI that is now core to its business.


Big Data – VentureBeat


Cloudera and Hortonworks merger means Hadoop’s influence is declining

October 7, 2018   Big Data

On Wednesday, Cloudera and Hortonworks announced a “merger of equals,” where Cloudera is acquiring Hortonworks with stock so that Cloudera shareholders end up with 60 percent of the combined company. The deal signifies that the Hadoop market could no longer sustain two big competitors. Hadoop has been synonymous with big data for years, but the market — and customer needs — have moved on. Several megatrends are driving this change:

The public cloud tide is rising

The first megatrend is the shift to public cloud. Companies of all sizes are increasing their adoption of AWS, Azure, and Google Cloud services at the expense of on-premises infrastructure and software. Enterprise server revenues reported by IDC and Gartner continue to decline. The Top 3 cloud providers (90 percent of the market) offer their own managed Hadoop/Spark services, such as Amazon’s Elastic MapReduce (EMR). These are fully integrated offerings that have a lower cost of acquisition and are cheaper to scale. If you’re making the shift to cloud, it makes sense to look at alternative Hadoop offerings as part of that – it’s a natural decision-point. Ironically, there has been no Cloud Era for Cloudera.

Crushing storage costs

The second megatrend? Cloud storage economics are crushing Hadoop storage costs. At introduction in 2005, the Hadoop Distributed File System (HDFS) was revolutionary: It took servers with ordinary hard drives and turned them into a distributed storage system capable of parallel IO consumable by Java apps. There was nothing like it, and it was a crucial component that allowed large scale data sets that didn’t fit onto a single machine to be processed in parallel. But that was 13 years ago. Today, there is a plethora of much cheaper alternatives, primarily object storage services like AWS S3, Azure Blob Storage, and Google Cloud Storage. A terabyte of cloud object storage costs about $20 a month, compared to about $100/month for HDFS (not including the cost to operate it). Which is why Google’s HDFS service, for example, is merely a shim that translates HDFS operations onto object storage operations – because that’s 5x cheaper.
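
The arithmetic behind the “5x cheaper” figure is simple, using the article’s round numbers:

    # Per terabyte-month, using the article's round numbers.
    object_storage = 20   # $/TB-month for cloud object storage
    hdfs = 100            # $/TB-month for HDFS, excluding operating cost
    print(hdfs / object_storage)   # 5.0, i.e. object storage is 5x cheaper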

Faster, better, and cheaper cloud databases

Hadoop’s problems don’t end there, because it’s not just about direct competition from cloud-vendor Hadoop/Spark services and cheaper storage. The third megatrend is the advent of “serverless” cloud services that completely eliminate the need to run Hadoop or Spark at all. A common use case for Spark is to handle ad-hoc distributed SQL queries for users. Google was first to market with a revolutionary service called BigQuery in 2011 that solves the same problem in a completely different way. It lets you run ad-hoc queries on any amount of data stored in its object storage service (you don’t have to load it into special storage like HDFS). You just pay for the compute time: If you need 1,000 cores for 3.5 seconds to run your query, that’s all you pay for. There is no need to provision servers, install the OS, install software, configure everything, scale the cluster to 1,000 nodes, and feed and care for the cluster as you would with Hadoop/Spark. Google does all that, hence the moniker “serverless.” There are banks running 2,000-node Hadoop/Spark clusters operated and maintained by scores of IT people that can’t match BigQuery’s flexibility, speed, and scale. And they have to pay for all the hardware, software, and people to run and maintain Hadoop.
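
As a back-of-the-envelope illustration of the pay-per-use model (the rates below are invented for illustration; BigQuery’s actual pricing is based on data scanned rather than core-seconds):

    # Invented rates, for illustration only: pay-per-use query vs. always-on cluster.
    rate_per_core_second = 0.00001                   # assumed $/core-second
    query_cost = 1000 * 3.5 * rate_per_core_second   # 1,000 cores for 3.5 seconds
    node_hour = 1.00                                 # assumed all-in $/node-hour
    cluster_cost = 2000 * node_hour * 24 * 30        # 2,000 nodes, always on
    print(f"${query_cost:.3f} per query vs ${cluster_cost:,.0f} per month")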

BigQuery is just one example. Other cloud database services are similarly massive scale, highly flexible, globally distributed “pay for what you use” databases. There’s start-up Snowflake, Google Bigtable, AWS Aurora, and Microsoft’s Azure Cosmos DB. They’re all much easier to use than a Hadoop/Spark install, and you can be up and running in 5 minutes for tens of dollars – no $500k purchase order and weeks of installation, configuration, and training required.

Python and R data science running on containers and Kubernetes

The fourth megatrend is containers and Kubernetes. Hadoop/Spark is not just a storage environment but also a compute environment. Again, back in 2005, this was revolutionary – the MapReduce approach of Hadoop provided a framework for parallel computation of Java applications. But the Java-centric nature (Scala-centric for Spark) of Cloudera and Hortonworks infrastructure is at odds with today’s data scientists doing machine learning in Python and R. The need to constantly iterate and improve machine learning models and to have them learn on production data means native deployment of Python and R models is a necessity, not a “nice to have.”

As recently as this week, the big Hadoop vendors’ advice has been “translate Python/R code into Scala/Java,” which sounds like King Hadoop commanding the Python/R machine learning tide to go back out again. Containers and Kubernetes work just as well with Python and R as they do with Java and Scala, and provide a far more flexible and powerful framework for distributed computation. And it’s where software development teams are heading anyway – they’re not looking to distribute new microservice applications on top of Hadoop/Spark. Too complicated and limiting.
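
To make “native deployment of Python models” concrete, here is a minimal sketch of the kind of HTTP prediction service one might package into a container and run on Kubernetes. Flask and the stand-in model are illustrative choices, not something the article prescribes:

    # Minimal prediction microservice of the kind one would containerize and
    # deploy on Kubernetes. The "model" is a trivial stand-in for a real one.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def predict(features):
        # Stand-in for a real scikit-learn / PyTorch model call.
        return sum(features) / len(features)

    @app.route("/predict", methods=["POST"])
    def handle():
        features = request.get_json()["features"]
        return jsonify({"prediction": predict(features)})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)  # the port a Kubernetes Service would target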

A shift in data gravity

The net is that after a good 10 years of Cloudera and Hortonworks being the center of the Big Data universe, the center of gravity has moved elsewhere. The leading cloud companies don’t run large Hadoop/Spark clusters from Cloudera and Hortonworks – they run distributed cloud-scale databases and applications on top of container infrastructure. They do their machine learning in Python, R, and other languages that are not Java. Increasingly, enterprises are shifting to similar approaches because they want to reap the same speed and scale benefits. It’s time for the Hadoop and Spark world to move with the times.

Mathew Lodge is SVP of Products and Marketing at Anaconda. He has over 20 years’ diverse experience in cloud computing and product leadership. Prior to joining Anaconda, he served as Chief Operating Officer at Weaveworks, the container and microservices networking and management startup; and he was previously Vice President in VMware’s Cloud Services group and co-founded what became VMware’s vCloud Air IaaS service.


Big Data – VentureBeat


to some folks, TDS still means The Daily Show rather than Trump Derangement Syndrome

August 21, 2018   Humor

“However, according to Kathleen Hall Jamieson of Annenberg Public Policy Center, the term could backfire on Trump supporters because people might interpret it to mean that Trump is the one who is “deranged”, rather than those who criticize him.”

Trump Derangement Syndrome (TDS) is apparently a meme with a contestable meaning which finally received its denotative place in the Urban Dictionary.

Perhaps because 45* is so globally vilified, TDS is currently defined unfavorably for him, at least until it gets freeped.

Even 45* used TDS in reacting to those criticizing his Helsinki treason, while exhibiting his insecurities about other “derangement syndromes”.

Huckabee Sanders tried to riff on TDS, only to fail.


Trump Derangement Syndrome is becoming a major epidemic among Democrats. Instead of freaking out about the booming Trump economy why not celebrate it?

— Sarah Sanders (@PressSec) August 2, 2018

The problem is that the term’s origins are unstable, so its meaning is indeterminate and dependent on context, making it an unreliable ad hominem expression.

It’s too complicated for 45* to understand his use of TDS in context (citing eponymously an association with derangement) so the possible blame for its authorship failure should go to Stephen Miller, who is still trying to get a clue about manipulating memes for a Trumpian death cult.

Like Charles Krauthammer’s authorship of Bush derangement, eventually some people will figure out that TDS might be truly self-descriptive, especially now that Trump is an even worse POTUS than Bush. As dim as Bush was, he was never truly unhinged.

Since TDS could be about a deranged Trump, the “rigged witch hunt” could really be about rigged witches.

Arguing that Trump’s opponents must instead recognize that the real problem is “Deranged Trump Self-Delusion,” Adam Gopnik defined the “Syndrome” as President Trump’s “daily spasm of narcissistic gratification and episodic vanity.”

The real problem, according to Gopnik, is that President Trump is a man of “fears and fits” with an “appetite … for announcing his authority through violence, a thing capable of an unimaginable resonance and devastation”.[13]

en.wikipedia.org/…


Think of this perhaps as a new kind of death cult, which means that Donald Trump might be considered the superpower version of an Abu Bakr al-Baghdadi https://t.co/3USLcrAAmC

— LMD English (@LMDiplo) May 9, 2018

Bush Derangement Syndrome (BDS) is a neologism coined by neoconservative pundit Charles Krauthammer.

As he defined it in a 2003 column, it is:

“…[T]he acute onset of paranoia in otherwise normal people in reaction to the policies, the presidency — nay — the very existence of George W. Bush.”[1]

BDS was originally coined by Krauthammer to attack Howard Dean, Barbra Streisand, and Bill Moyers in a pathetic attempt to equate them to full-on conspiracy theories about Dubya like Cynthia McKinney’s endorsement of 9/11 “truth,” and it soon became a convenient way for wingnuts to handwave away any criticism of Bush.[2]





Some reactionary idiots will waste resources trying to get TDS reversed, but like most RW humor, it rarely requires intellect.




.@urbandictionary has spoken.

The definitive meaning of “Trump Derangement Syndrome” is exactly as we thought… pic.twitter.com/Wu6uPqLXXb

— Philippe Reines (@PhilippeReines) August 16, 2018




moranbetterDemocrats


Data Availability 101: What Data Availability Means and How to Achieve It

June 24, 2018   Big Data

Christopher Tozzi

June 20, 2018

What is data availability, and what does data availability mean for your business? Keep reading for an overview of data availability and best practices for achieving it.

What Is Data Availability?

Put simply, data availability refers to the ability of data to remain accessible at all times, including following unexpected disruptions.

You could think of data availability as the data equivalent of application uptime. Software vendors and service providers like to talk about how they guarantee extremely high levels of uptime for their applications and services. (For stored data, Amazon famously promises “eleven nines” of durability on S3, for example.) They do this because they want to emphasize how much effort they put into keeping their software up and running even when unexpected events — like a disk failure, cyber-attack or natural disaster — occur.
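
It helps to translate those “nines” into the downtime they allow over a year:

    # Convert an availability percentage into allowed downtime per year.
    MINUTES_PER_YEAR = 365 * 24 * 60

    for pct in (99.0, 99.9, 99.99, 99.999):
        downtime = MINUTES_PER_YEAR * (1 - pct / 100)
        print(f"{pct}% availability allows {downtime:,.1f} minutes of downtime/year")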

Data availability is similar in that it is a measure of how long your data remains available and usable, regardless of which disruptions may be occurring to the infrastructure or software that hosts the data.


Why Does Data Availability Matter?

Ensuring data availability is important for a number of reasons. Some of them are obvious, and some less so.

Most obviously, if you depend on data to power your business, you want to keep that data available so that your business can continue to operate normally. Lack of availability of a database that contains customer email addresses might prevent your marketing department from conducting an email campaign, for example. Or the failure of a database that hosts account information might disrupt your employees’ ability to log into the applications that they need to do their jobs.

Data availability matters beyond your own organization, too. In many cases, your relationships with partner companies depend in part on the sharing of data, and if the data you are supposed to provide is unavailable, it could harm your partnerships.

In some cases, licensing agreements with vendors or customers may also require you to maintain certain levels of data availability. So could compliance frameworks; for example, article 32 of the GDPR mandates that companies retain “the ability to restore the availability and access to personal data in a timely manner.”

Achieving High Data Availability


Guaranteeing high rates of data availability requires addressing a number of factors that impact whether data is accessible:

The physical reliability of infrastructure

Are your servers and disks designed with data availability in mind? Is your data distributed across clusters so that it will remain available even if some parts of the infrastructure fail? Do you have tools and procedures in place to alert you to and help you resolve problems with the infrastructure? Are loads properly balanced across your infrastructure so that wear and tear is distributed evenly in order to maximize the longevity of the infrastructure as a whole? Are you prepared to handle disruptions like DDoS attacks, which could prevent access to your data?

Server and database recovery time

If your infrastructure does fail and you need to recover data, how quickly can you get your servers, disks and databases back up and running? The answer to this question depends not just on how quickly you can set up replacement hardware, but also how long it takes your software tools to perform tasks like rebooting operating systems and restarting database services.

Repair of corrupted data

Data can become unavailable not only when the infrastructure hosting it disappears, but also when the data becomes corrupted, and therefore unusable. How effective are your tools and processes at finding and repairing corrupted data?

Data formatting and transformation

Data that is not available in the correct format, or that takes a long time to transform from one format to another in order to become usable, can also cause data availability problems. Do you have the tools and processes in place to streamline data formatting and transformation?

Data Availability and Disaster Recovery

In most cases, data availability should be one component of your business’s disaster recovery and business continuity plan. Disaster recovery and business continuity involve making sure that all of your infrastructure, applications, and data are protected against unexpected disruptions.

When forming a disaster recovery plan, you should take into account the factors described above that impact data availability. You should also calculate metrics like Recovery Time Objective (RTO), which measures how quickly you need to restore data in order to maintain business continuity, and Recovery Point Objective (RPO), which measures how much data you can afford to lose permanently following a disaster without causing a critical business disruption.
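
A minimal sketch of how those two metrics constrain a backup plan, with assumed numbers for illustration:

    # Check a backup/restore plan against RTO and RPO targets. All values assumed.
    rto_minutes = 60     # must be back up within an hour
    rpo_minutes = 15     # can afford to lose at most 15 minutes of data

    backup_interval_minutes = 10     # snapshots every 10 minutes
    measured_restore_minutes = 45    # how long a test restore actually took

    # Worst-case data loss equals the backup interval; restore time must fit the RTO.
    print("RPO met:", backup_interval_minutes <= rpo_minutes)    # True
    print("RTO met:", measured_restore_minutes <= rto_minutes)   # True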


Let’s block ads! (Why?)

Syncsort Blog

Read More

What the GDPR Means for Small US Etailers

June 22, 2018   CRM News and Info

Large corporations are not the only businesses governed by the European General Data Protection Regulation, or GDPR, which became effective last month.

Small and mid-sized businesses also are subject to its provisions.

The regulation applies to the processing of personal data of individuals in the EU by an individual, a company or an organization engaged in professional or commercial activities.

“The common misconception is that if you don’t have an office in the EU, then the GDPR doesn’t apply to you,” said Cindy Zhou, principal analyst at Constellation Research.

However, shipping products to the European Economic Area (EEA) or sourcing them from the region are activities governed by the GDPR, she told the E-Commerce Times.

“The online marketplace has no borders,” noted Wesley Young, VP for public affairs at the Local Search Association.

That may be changing, however.

“We have seen many small businesses … exclude EU subjects from their clientele to avoid exposure to GDPR risks,” observed Andrew Frank, distinguished analyst at Gartner.

“This could impact assumptions about the frictionless global nature of e-business,” he told the E-Commerce Times.

GDPR Pitfalls for Unwary SMBs

The GDPR’s definition of personal data is “very broad,” LSA’s Young told the E-Commerce Times. “That would include IP addresses, location information, demographic information, and other general data used for targeting ads.”

The term “process” also is broadly defined, “and includes collecting and storing data, even if it isn’t further used,” he observed.

“The breadth of the GDPR’s application lends itself to be easily but unintentionally violated,” Young noted. For example, not following through on policy changes — failing to abide by new privacy policies, or not training staff to adhere to them — might be a violation.

Using data beyond the reason for which it was collected might be a violation, suggested Young, as consent has to be given for specific purposes.

The Ins and Outs of Consent

The GDPR “allows six different legal bases for collecting or processing personal data, of which consent is but one,” said Robert Cattanach, partner at Dorsey & Whitney.

For most e-commerce situations, the transaction arguably constitutes a contract, and “additional consent may not be required” to collect personal data necessary to conclude the transaction, he told the E-Commerce Times. However, the question of consent will arise when a merchant engages third-party vendors to track or monitor customer behavior on its website.

Monitoring or aggregating customer behavior on a merchant’s website to learn when a customer decides to place an order or abandon the search by using cookies is one option, Cattanach noted.

“The UK’s Information Commissioner’s Office has opined that implied consent may be sufficient for such site tracking,” he pointed out. Therefore, a pop-up banner stating continued use of the site means consent to the use of cookies might suffice — although some of the German data protection authorities might not agree.

For the collection of personal data, a pop-up requiring the customer to independently agree to it would be necessary.

Two major issues remain unresolved, according to Cattanach:

  • What constitutes informed consent is still “a matter of ongoing dispute”; and
  • Responses to data subject access requests — such as the right to discover what data has been collected, correct errors, and request to be forgotten — “are legally less problematic on their face but, as a practical matter, may be more difficult to execute.”

Requests to be forgotten require merchants to establish process flows for the intake of such requests; set policies for when such requests will be granted or denied; and implement procedures for responding within 30 days.

That is “no small undertaking,” Cattanach remarked, “which is why many SMBs have just decided to avoid triggering GDPR by expunging all existing data of EU residents and blocking EU IP addresses from accessing their websites going forward.”
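
As a sketch of just the intake side, tracking a request against the 30-day response window needs little more than a received date and a computed deadline (the record layout below is invented, not a compliance recommendation):

    # Invented sketch: track a GDPR erasure ("right to be forgotten") request
    # against the 30-day response window described above.
    from dataclasses import dataclass, field
    from datetime import date, timedelta

    @dataclass
    class ErasureRequest:
        subject_email: str
        received: date = field(default_factory=date.today)

        @property
        def respond_by(self):
            return self.received + timedelta(days=30)

    req = ErasureRequest("customer@example.com")
    print(f"Respond to {req.subject_email} by {req.respond_by}")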

Records of processing were expected to be the most challenging of the data subject rights requirements by 48.5 percent of more than 1,300 U.S. business users and consumers who participated in an online survey CompliancePoint conducted this spring.

Only 29 percent of respondents to the CompliancePoint survey were fully aware of the GDPR; 44 percent were somewhat aware and 26 percent were unaware.

Other data subject rights problems they anticipated:

  • Accountability – 41 percent;
  • Consent and data portability – 39.7 percent each; and
  • Right to be forgotten – 35.3 percent.

GDPR Readiness

Twenty-four percent of business respondents to the CompliancePoint survey said their organizations were fully prepared for the GDPR, while 31 percent said they were somewhat prepared and 36 percent said their organizations were not prepared.

Following are some of the factors that kept the organizations of CompliancePoint respondents from being GDPR compliant:

  • Waiting to see what enforcement would be applied – 45.6 percent;
  • Lack of understanding of the regulations – 39.7 percent;
  • No budget for compliance – 36.8 percent;
  • Low brand visibility – 33.8 percent; and
  • Unconcerned – 27.9 percent.

“SMBs are not immune to the risk of GDPR,” said Greg Sparrow, general manager at CompliancePoint.

“The risk of fines and regulatory action are the same for businesses large and small,” he told the E-Commerce Times.

The financial penalties — up to 4 percent of annual revenue or 20 million euros, whichever is greater — are large, noted Constellation’s Zhou.

However “the indirect costs in terms of impact on customer trust and brand reputation may be even greater,” said Gartner’s Frank.

CRM Software to the Rescue

CRM systems that make it relatively easy to execute functions like erasure and consent modification “can help considerably,” Frank suggested.

“SugarCRM recently released a data privacy module that automates much of the processes for managing the required data governance,” remarked Rebecca Wettemann, VP of research at Nucleus Research.

Zoho, Hubspot, Salesforce and other CRM vendors “are touting GDPR compliance,” Zhou noted.

“SMBs running cloud CRM applications will likely find the easiest path to compliance, because data privacy capabilities have been or are being built into these applications,” Wettemann told the E-Commerce Times.

That said, CRM companies are data processors by definition, Zhou pointed out, and under the guidance of the company that collected the customer data.

“Privacy policies, cookie notices and age consent forms all need to be managed by the SMBs themselves,” she said, “and are often placed on a website or on the e-commerce site which isn’t related to the CRM solution.”


Richard Adhikari has been an ECT News Network reporter since 2008. His areas of focus include cybersecurity, mobile technologies, CRM, databases, software development, mainframe and mid-range computing, and application development. He has written and edited for numerous publications, including Information Week and Computerworld. He is the author of two books on client/server technology.


CRM Buyer



Syncsort’s New Branding: What It Means for You

May 23, 2018   Big Data

Jamie Heckler

May 22, 2018

We are excited about the rollout of Syncsort’s new branding! Today’s launch is the result of a months-long journey to better share our story and deepen our connection with our customers.

Syncsort brand refresh reflects meaningful growth and transformation

Over the past two years, our company has experienced tremendous growth. Through strategic mergers and acquisitions, we’ve doubled in size, nearly tripled our revenue, and added market-leading data quality, capacity management, data availability and security software to expand our world-class Big Iron to Big Data portfolio.

As with any growth of this size, we have seen considerable change. We’ve added more remarkably talented individuals to our ever-expanding Syncsort team, reshaping our organization into a new whole that’s more valuable than the sum of its parts.

The refreshed Syncsort brand is both a reflection of the transformation we have already experienced, as well as the further potential we see for ourselves.

We organize data everywhere, to keep the world working

One key goal of our brand initiative was to more clearly communicate how we help our customers.

Technical experts can find themselves giving lengthy speeches full of technical jargon. We recognize that we too have been guilty of such “tech speak.” Syncsort’s new brand messaging helps to clearly define how our products help customers: We organize data everywhere, to keep the world working.

Jennifer Cheplick, Senior Director of Product Marketing at Syncsort, shares her thoughts on the subject. “As the leader of product marketing, it has been an exciting time to see our product portfolio expand to meet the needs of today’s data-driven companies. The branding initiative gives us a great platform to communicate to our customers and partners all the ways our solutions can help them be successful in their own businesses.  The modern and dynamic feel of the new brand is a perfect match for our cutting-edge products!”

We optimize, assure, integrate and advance

We’ve organized our expanded portfolio into segments that define their value to your business. As part of our brand refresh, we are consolidating our network of websites into a redesigned Syncsort.com that houses our complete product portfolio:

  • Syncsort Optimize gives your enterprise the best range of products and unmatched expertise to deliver optimum performance and resource utilization for critical data systems. This portfolio of data infrastructure optimization products includes mainframe software such as MFX, ZPSaver Suite, and DL/2 as well as our Athene cross-platform capacity management product.
  • Syncsort Assure products provide market-leading business continuity and data protection for IBM i servers – spanning high availability, disaster recovery, migration, data replication and security. This portfolio contains Cilasoft, Enforcive, iTERA, MIMIX and Quick-EDD products as well as our Managed Services offered in this area.
  • Syncsort Integrate products let you unlock valuable data from legacy systems including mainframes, IBM i servers, and enterprise data warehouses (EDWs), ensuring that data is complete, accurate and trusted. This portfolio of data integration and quality software includes Ironstream, DMX-h and Trillium data quality products.
  • Syncsort Advance is a portfolio we expect to expand through continued investment in new data innovations to extract even more value from your critical data. Currently this area showcases our products that support your cloud strategies.

Syncsort CPO David Hodgson is excited about the new product segmentation. He states, “The rebranding gives us an updated framework for categorizing our greatly expanded and growing product portfolio, allows us to better focus our product strategy and roadmap, and provides a fresh image for the new Syncsort.”

Hodgson adds, “We are proud of what we have achieved as a company and we are proud of the name Syncsort. Also, in my organization, everyone is a customer-facing brand ambassador for Syncsort. With the new logo and the new positioning, it feels like a new energy has been released for us to go out and represent who we truly are: a dynamic, vibrant organization that is changing the face of data management for the largest organizations in the world.”

Related press release: Syncsort’s Data Integration Innovations Address Top 2018 Big Data Trends

Advancing data

“Through acquisitions and in-house development, Syncsort has greatly expanded our portfolio of products,” says Syncsort CTO Tendü Yoğurtçu, PhD. “This is an exciting time to share our new brand with the world and make ourselves known as the organization that optimizes, assures, integrates and advances data.”

Yoğurtçu closely monitors the latest technology trends, looking for new avenues for Syncsort to advance data. Her team then marries these modern tools with our unmatched expertise in legacy systems that remain critical to enterprises across the globe.

Yoğurtçu notes how well the new branding aligns with Syncsort’s strategy and community. “Not only is the new branding inspiring and fresh, it also gives me the ability to focus our technology organization and development around the product areas we defined as part of our vision. I am very excited about how the new branding reflects Syncsort’s innovative culture and customer focus.”


Brand visuals reflect messaging themes

Syncsort’s new brand visuals are a reflection of our brand themes. The brand mark within our new logo symbolizes how Syncsort organizes data by arranging smaller components into an abstract S shape. The brand mark, like the other supporting brand visuals, is set at a 45-degree angle toward the upper right, evoking upward and forward motion in support of our brand concept of advancing data.

Our new color palette is also an excellent representation of our organization. The traditional dark blue represents Syncsort’s rich experience within the industry. The bright blue and violet build on the dark blue as a modern take on a monochromatic scheme. And lastly, the pop of lime green nods at our embrace of new technologies and innovations for the future of data.

A Syncsort brand that’s ready for tomorrow

Successfully refreshing a brand is no small task. We’re proud of what we’ve put together because we believe it tells our story well. We hope it helps us connect with you in a more meaningful way.

Take a spin around our updated site and let us know what you think!


Syncsort Blog
