Tag Archives: center

Center for Applied Data Ethics suggests treating AI like a bureaucracy

January 22, 2021   Big Data

A recent paper from the Center for Applied Data Ethics (CADE) at the University of San Francisco urges AI practitioners to adopt terms from anthropology when reviewing the performance of large machine learning models. The research suggests using this terminology to interrogate and analyze bureaucracy, states, and power structures in order to critically assess the performance of large machine learning models with the potential to harm people.

“This paper centers power as one of the factors designers need to identify and struggle with, alongside the ongoing conversations about biases in data and code, to understand why algorithmic systems tend to become inaccurate, absurd, harmful, and oppressive. This paper frames the massive algorithmic systems that harm marginalized groups as functionally similar to massive, sprawling administrative states that James Scott describes in Seeing Like a State,” the author wrote.

The paper was authored by CADE fellow Ali Alkhatib, with guidance from director Rachel Thomas and CADE fellows Nana Young and Razvan Amironesei.

The researchers particularly look to the work of James Scott, who has examined hubris in administrative planning and sociotechnical systems. In Europe in the 1800s, for example, timber industry companies began using abridged maps and a field called “scientific forestry” to carry out monoculture planting in grids. While the practice resulted in higher initial yields in some cases, productivity dropped sharply in the second generation, underlining the validity of scientific principles favoring diversity. Like those abridged maps, Alkhatib argues, algorithms can both summarize and transform the world and are an expression of the difference between people’s lived experiences and what bureaucracies see or fail to see.

The paper, titled “To Live in Their Utopia: Why Algorithmic Systems Create Absurd Outcomes,” was recently accepted for publication by the ACM Conference on Human Factors in Computing Systems (CHI), which will be held in May.

Recalling Scott’s analysis of states, Alkhatib warns against harms that can result from unhampered AI, including the administrative and computational reordering of society, a weakened civil society, and the rise of an authoritarian state. Alkhatib notes that such algorithms can misread and punish marginalized groups whose experiences do not fit within the confines of the data used to train a model.

People privileged enough to be considered the default by data scientists and who are not directly impacted by algorithmic bias and other harms may see the underrepresentation of race or gender as inconsequential. Data Feminism authors Catherine D’Ignazio and Lauren Klein describe this as “privilege hazard.” As Alkhatib put it, “other people have to recognize that race, gender, their experience of disability, or other dimensions of their lives inextricably affect how they experience the world.”

He also cautions against uncritically accepting AI’s promise of a better world.

“AIs cause so much harm because they exhort us to live in their utopia,” the paper reads. “Framing AI as creating and imposing its own utopia against which people are judged is deliberately suggestive. The intention is to square us as designers and participants in systems against the reality that the world that computer scientists have captured in data is one that surveils, scrutinizes, and excludes the very groups that it most badly misreads. It squares us against the fact that the people we subject these systems to repeatedly endure abuse, harassment, and real violence precisely because they fall outside the paradigmatic model that the state — and now the algorithm — has constructed to describe the world.”

At the same time, Alkhatib warns people not to see AI-driven power shifts as inevitable.

“We can and must more carefully reckon with the parts we play in empowering algorithmic systems to create their own models of the world, in allowing those systems to run roughshod over the people they harm, and in excluding and limiting interrogation of the systems that we participate in building.”

Potential solutions the paper offers include undermining oppressive technologies and following the guidance of Stanford AI Lab researcher Pratyusha Kalluri, who advises asking whether AI shifts power, rather than whether it meets a chosen numeric definition of fair or good. Alkhatib also stresses the importance of individual resistance and refusal to participate in unjust systems to deny them power.

Other recent solutions include a culture change in computer vision and NLP, reduction in scale, and investments to reduce dependence on large datasets that make it virtually impossible to know what data is being used to train deep learning models. Failure to do so, researchers argue, will leave a small group of elite companies to create massive AI models such as OpenAI’s GPT-3 and the trillion-parameter language model Google introduced earlier this month.

The paper’s cross-disciplinary approach is also in line with a diverse body of work AI researchers have produced within the past year. Last month, researchers released the first details of OcéanIA, which treats a scientific project for identifying phytoplankton species as a challenge for machine learning, oceanography, and science. Other researchers have advised a multidisciplinary approach to advancing the fields of deep reinforcement learning and NLP bias assessment.

We’ve also seen analysis of AI that teams sociology and critical race theory, as well as anticolonial AI, which calls for recognizing the historical context associated with colonialism in order to understand which practices to avoid when building AI systems. And VentureBeat has written extensively about the fact that AI ethics is all about power.

Last year, a cohort of well-known members of the algorithmic bias research community created an internal algorithm-auditing framework to close AI accountability gaps within organizations. That work asks organizations to draw lessons from the aerospace, finance, and medical device industries. Coauthors of the paper include Margaret Mitchell and Timnit Gebru, who used to lead the Google AI ethics team together. Since then, Google has fired Gebru and, according to a Google spokesperson, opened an investigation into Mitchell.

With control of the presidency and both houses of Congress in the U.S., Democrats could address a range of tech policy issues in the coming years, from laws regulating the use of facial recognition by businesses, governments, and law enforcement to antitrust actions to rein in Big Tech. However, a 50-50 Senate means Democrats may be forced to consider bipartisan or moderate positions in order to pass legislation.

The Biden administration emphasized support for diversity and distaste for algorithmic bias in a televised ceremony introducing the science and technology team on January 16. Vice President Kamala Harris has also spoken passionately against algorithmic bias and automated discrimination. In the first hours of his administration, President Biden signed an executive order to advance racial equity that instructs the White House Office of Science and Technology Policy (OSTP) to participate in a newly formed working group tasked with disaggregating government data. This initiative is based in part on concerns that an inability to analyze such data impedes efforts to advance equity.


Big Data – VentureBeat


Adding New Users to Allow Access to Power Apps Portals Admin Center

July 31, 2020   Microsoft Dynamics CRM

A common issue for new Power Apps Portals deployments is that multiple internal employees may need admin-level access to the Power Apps Portals admin center. These users need this level of access to perform common portal actions, including:

  • Restart the portal
  • Update Dynamics 365 URL
  • Install Project Service Automation Extension
  • Install Field Service extension for partner…


PowerObjects- Bringing Focus to Dynamics CRM


The Pentagon’s Joint AI Center wants to be like Silicon Valley

May 24, 2020   Big Data

It was a busy week for defense-focused AI: DarwinAI signed a partnership with Lockheed Martin, the world’s largest defense contractor, to work on explainable AI solutions. Robotics company Sphero, maker of the BB-8 droid from Star Wars, spun out Company Six, which will focus on military and emergency medical applications. And Google Cloud was awarded a Pentagon contract for its multi-cloud solution Anthos this week, even as the $10 billion JEDI contract fight between AWS and Microsoft Azure remains tied up in court.

But one of the most notable contracts was a five-year, $800 million contract with Booz Allen Hamilton to support the Joint AI Center (JAIC) warfighting group. (Another contract to create the Joint Common Foundation, a cloud-based AI development environment for the military, is also in the works.) This week, the JAIC also shared more details about Project Salus, a series of predictions from 40 models about supply chain trends in the age of COVID-19 for the U.S. Northern Command and National Guard. The JAIC took Salus from an idea to AI for military decision makers in two months.

The JAIC was created in 2018 to lead Pentagon efforts to use more artificial intelligence, and it’s also tasked with leading military ethics initiatives. The group is currently in the midst of a major transition, perhaps more than at any time since its creation. Air Force Lt. Gen. Jack Shanahan, who has served as JAIC director since the organization’s founding, steps down next week. VentureBeat spoke at length with his incoming replacement, current JAIC CTO Nand Mulchandani, about the future of the organization.

Shanahan spearheaded Project Maven in 2017 with companies like Xnor, Clarifai, and Google, where thousands of employees opposed the contract. For the sake of national security and the economy, Shanahan argues that business, academia, and military must work together on AI solutions as algorithmic warfare becomes a reality.


Although Shanahan retires August 1, according to the Air Force website, Mulchandani will take over as acting director in the next two weeks, a JAIC spokesperson told VentureBeat in an email today. He will serve as acting director until a flag officer or 3-star general is confirmed as director later this year. Unlike Shanahan, Mulchandani spent most of his career building startups in Silicon Valley.

In March, the JAIC finished revamping the way it approaches AI projects and workflows and moved to adopt a different organizational structure. Mulchandani said it is now the only organization in the Pentagon to have product managers and an AI product development approach reminiscent of Silicon Valley startups and enterprise sales teams.

In an interview with VentureBeat, Mulchandani talked in detail about the March restructuring, how the JAIC is helping the U.S. military develop predictive early warning systems, how the Joint AI Center will use Google Cloud’s Anthos and the $800 million Booz Allen Hamilton contract, and how COVID-19 is influencing the group’s operating mission.

This interview was edited for brevity and clarity.

VentureBeat: How will the Booz Allen Hamilton contract play a role in supporting JAIC teams? And will the Joint Common Foundation (JCF) or any other part of JAIC be part of the Google Cloud Anthos deal that was signed with the Pentagon earlier this week?

Mulchandani: We had to find somebody who could help us pull together the sort of software-hardware interface and bring that level of expertise in terms of dealing with the tactical edge or applying models to different form factors with drones and things like that. And that’s where the [Booz Allen Hamilton] joint warfighting contract came in, for dealing with incredibly complex deployments across geographic regions and dealing with different pieces of hardware, software, etc.

So that’s what we’re relying on them for. Their job really is to work with us on pulling in the best AI technology to be able to deploy, so in some sense we’re not relying on Booz to actually build out all the artificial intelligence for us. They’re here to help us assemble and pull the best of these pieces together.

Similarly, we have a Joint Common Foundation RFP out on the street that we’re actually going through the procurement process [for] right now. We’ll have news on that hopefully soon.

The JCF is different in that it is more of an AI development environment, as opposed to an infrastructure management system. The basic goal is that when we build out the JCF, hopefully very soon, we will bring best-of-breed AI tools and products into the system to allow for AI development.

VentureBeat: So will JAIC be using Anthos then?

Mulchandani: That one [Anthos] came in from the Defense Innovation Unit (DIU). And the sort of multi-cloud model I mean, it’s tied up in this whole discussion around JEDI and cloud infrastructure and the JCF. The DIU did that particular one, and they’ve done it as a sort of general purpose contract that allows DoD [Department of Defense] customers to use Anthos as a multi-cloud management system.

The JCF is going to be fairly agnostic toward the infrastructure-level stuff. We’ll support pretty much all the standard stuff that will support Google Cloud, Azure, Amazon — these are all targets for workloads. And when JEDI shows up, the JCF will be pretty agnostic toward all types of cloud infrastructure and targets. So it’s somewhat related, but those are kind of two separate things on that front.

VentureBeat: Can you walk me through some of the structural changes that the JAIC went through earlier this year?

Mulchandani: The reorientation of JAIC basically came around the need for product managers and product teams that build world-class products, and we need a missions team with colonels from our military running those missions that understand to a very deep level what our needs and requirements are.

What we’ve done is taken two sort of key models and applied them. So one is the venture capital model, which is really more about how … we approach investing in and selecting the products and projects that we take on. And the other I would characterize as an enterprise sales model.

Those are the two models that we apply because they are well understood. Thousands of companies do this every day out in industry in the world.

I spent 26 years in Silicon Valley before going to the DoD. I’ve been here one year, and the most natural pattern for me was the venture capital model because it deals with early-stage technology. At the DoD, that’s a little trickier, because this is the United States military, and we have lots of people who are trained in doing military things, not building software.

If you’ve ever been in an enterprise technology company, [you know] how we build and run sales teams and deal with volumes of customers, and how you triage them and grow that customer base to convert them from leads to customers. “Sales” could have a negative connotation, like we’re trying to sell something, but the way we’ve modeled it is more in terms of customer relationships and customer knowledge.

So we now have a product group that’s composed of a couple of key types of people. So number one is the product manager. The product manager is in charge of owning that sort of requirement-gathering, functional specs [and] selecting and working with the engineering team to build this thing out and make it happen.

We’ve got a very robust AI and data science team. We’ve got a test and eval section testing products to make sure that they’re there, but it’s also imbibing and making sure that we’re following many of the ethical principles. And the missions team is really — you should think of [that] as our enterprise sales team.

VentureBeat: How is Project Salus a step in a new direction for JAIC?

Mulchandani: So what we did with Salus was a great example of this model, where instead of spending a year over-specifying the product, you get the core needs and requirements in a really basic depth. The funny part was the first version of Salus that we showed NORTHCOM and the [National] Guard; in some sense, they were a little puzzled because they were like “What’s this? This seems like really, really half-baked.” And we had to explain to them that this is the new world, that in seven days or eight days this is what you typically get.

It’s a bare bones product that barely works. Every single company that I started, you know, the first board meeting after we raised a couple million dollars from investors, you go show the first product, the “Hello World” product to the board, and if they haven’t done venture capital before or early tech, some of them do fall out of their chairs and say, “Wait, I just spent a couple of million bucks for this company and this is what you built?” It’s like, “Well no, it’s not ready yet. We’re going to stay close to you and learn what you want and what you need, and we’re going to iterate with you.”

VentureBeat: How is COVID-19 sort of changing the way that JAIC looks at its mission, or things of that nature?

Mulchandani: What [Salus] actually taught us to do was to get this sort of concept of code thing into practice, this idea of having the product manager, project managers, the legal and policy folks, the test and eval, everyone, co-residents, the missions team, all aligned and working together. We basically want to bottle up this experience and this method of building products, and we want to replicate it.

This all stemmed a lot from an off-site that we did a couple of months ago. And it gave us a chance for the entire JAIC to actually go off-site, sit together, think, ideate, spend some time together outside of the office. And one of the things that struck us was every great organization has a business model and a business plan that it operates on.

When you start a company, what is your world-changing idea? What’s your business plan and model to go attack that business plan? That led us to our mission and charter around leverage and creating repeatability and industrialization and scale around AI.

VentureBeat: In an interview earlier this week, Lt. Gen. Shanahan brought up this notion of a national predictive warning system and that JAIC will be involved with more of that in the future, in part to respond to COVID-19 and as part of a general effort to make the DoD more predictive than reactive. Can you talk a little bit more about that idea of creating early warning systems with AI?

Mulchandani: Typically, in the popular imagination [with AI], folks go to killer robots and Terminator immediately. But what you’re going to realize is the biggest revolution that’s going to occur with AI in the short term, and is already happening, is in decision support.

That is the biggest and most critical area where it can aid and unambiguously support, and [it’s] also easily deployable without any of the sort of ethics and other issues that we’re grappling with longer-term around the economy and things like that.

What makes AI so powerful — and this is where Lt. General Shanahan and all of us are sort of pushing this idea of this area in AI — is the ability to highlight and come up with a non-obvious result. When you look at many of the more surprising results in AI recently — all the stuff around Go for instance — the one phrase that we all have to look at is the one which says, “The computer behaved in a non-human way.” That to me was the critical insight into that whole drama — the non-obvious nature of the solution that the system came up with. And that typifies AI, and that’s where it’s different from statistics and the normal math that people use.

It’s the non-obvious things that humans won’t think about, which is where we’re going with predictive capabilities, and we want to apply this sort of model, not only to pandemic modeling, but of course we’re going to apply this to … joint warfighting and many of the decision support stuff that we need at the DoD to be able to fight and win the next war, if and when it happens.


Big Data – VentureBeat


Stay Informed with Message Center notifications for Dynamics and PowerApps

May 10, 2020   Microsoft Dynamics CRM

I have been asked a few times recently how certain users or administrators can stay informed on important changes such as service updates and other major updates that may impact their environment. The Message Center available in the Microsoft 365 Admin Center gives you a high-level overview of planned changes, dates, and how this may affect your users. 

Most Dynamics users will not have access to the Message Center unless they have been assigned one of the admin roles that grants it. Not all admin roles include Message Center access; there is a list of roles that DO NOT have access to messages HERE.

In regard to Dynamics, those who need access to this information may have the Dynamics 365 Admin or Power Platform Admin roles. However, there are other roles that will give access to these messages.


There are some new read-only roles, one of which provides access to Message Center messages without the additional access that comes with the admin roles.


One of these is the Message Center Reader role.


When a user is granted this role, they can navigate to https://admin.microsoft.com/Adminportal/

Under the Health tab, they will see Message Center.


It’s important to note that they will also have read-only access to users, groups, domains, and subscriptions.

From there, they can change their preferences for which messages they want to see. It will take up to 8 hours for changes to these settings to take effect. 


You can also select the option to get a weekly digest of messages, including the ability to add two additional email addresses that can receive these notifications. Since the admin roles may not be as granular as some would hope, you can create a distribution list to send these notifications to a specific group of people who may need them to plan accordingly. This weekly digest goes out on Mondays only, and you must be opted in by the prior Saturday to get the digest for that week.


If the Message Center Reader role provides too much access, such as visibility into other service notifications, you can set up one user who is allowed to see these messages, configure their preferences to show only Dynamics messages, and then add other user email addresses in their preferences or set up a forwarding rule within Outlook.

Since this is only weekly, it obviously does not provide real-time notifications as some would like. There are two options to configure real-time notifications, using either Power Automate or the Microsoft Office 365 Service Communications API.

There is a great blog post HERE from one of our Fast Track Solution Architects on how to use a Power Automate flow to configure and receive real-time notifications.

Additionally, another one of our Premier Field Engineers, Ali Youssefi, has a blog post HERE on using the Microsoft Office 365 Service Communications API to setup real-time notifications.
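For those who want to prototype the API route before wiring up a full solution, here is a minimal sketch in Python. It assumes an Azure AD app registration that has been granted access to the Office 365 Management APIs; the tenant ID, client ID, and secret are placeholders, and the endpoint and field names should be verified against the current Service Communications API documentation rather than taken from this sketch.

```python
# Minimal sketch: poll the Office 365 Service Communications API for
# Message Center posts. Assumes an Azure AD app registration with access
# to the Office 365 Management APIs; tenant ID, client ID, and secret are
# placeholders, and endpoint/field names should be checked against the docs.
import requests

TENANT_ID = "<tenant-guid>"
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<app-client-secret>"

def get_token():
    # Client-credentials flow against Azure AD for the manage.office.com resource.
    resp = requests.post(
        f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/token",
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "resource": "https://manage.office.com",
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def get_message_center_posts(token):
    # Message Center messages for the tenant; the filter uses OData syntax.
    url = (
        f"https://manage.office.com/api/v1.0/{TENANT_ID}"
        "/ServiceComms/Messages?$filter=MessageType eq 'MessageCenter'"
    )
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.json().get("value", [])

if __name__ == "__main__":
    for msg in get_message_center_posts(get_token()):
        # Surface only the posts that mention Dynamics, as discussed above.
        if "Dynamics" in (msg.get("Title") or ""):
            print(msg.get("Id"), msg.get("Title"))
```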

Thanks for reading!

Aaron Richards


Dynamics 365 Customer Engagement in the Field


Zoom Oracles Its Way to Center Stage

May 4, 2020   CRM News and Info

Oracle and Zoom Video Communications just entered a deal that for once is more about technological audacity than about dollars. The companies on Tuesday announced a partnership to host Zoom on Oracle Cloud Infrastructure.

In just a few months — basically since the beginning of the novel coronavirus pandemic — Zoom has seen demand for its service grow from about 10 million daily meeting participants to more than 300 million.

For comparison, recall that the U.S. population is in the neighborhood of 327 million people. So in the blink of an eye, the companies say they’ve been able to spin up a facility that can serve at least some of the video communications needs of a global community with a user set almost equal to the U.S. population.

As important as this deal is for Zoom, by helping it to scale massively, the announcement also provides a concrete example of Oracle’s prowess and capacity. It’s a story the company will be telling for some time.

Travel Out, Video In

Over the last two decades, companies have turned to video conferencing in times of economic stress.

Business travel spending predictably drops during economic slumps, according to Statista, a market research firm that tracks issues related to business travel, such as total spend, hotel market size, and airline revenues worldwide.

Spending flatlined for four years at the beginning of the century coinciding with the dot-com era, and again in the wake of the sub-prime mortgage crisis, but each time travel spending came back.

For instance, travel spending in 2003 was just a little less than US$600 billion; in 2009 it settled at $839.8 billion after reaching nearly $1 trillion the year before.

What’s missing from these statistics is the growth of the video conferencing market.

The market generated $3.85 billion in revenues in 2019, according to Grand View Research. It projects a CAGR of 9.9 percent between now and 2027.

Big Industry, Big Disruption

You might think that video conferencing is no match for business travel, and it’s true that business travel always rebounds because there are valid reasons for it. Even so, the video conferencing market looks exactly like a disruptive innovation growing from the grass roots.

Given that business travel expenses from all sources, including hotels and airfare, were clocked at more than $1.25 trillion in 2019, video conferencing has a lot to disrupt.

Last point: Zoom surveyed the marketplace before picking Oracle, as any prudent business should. For several years, Oracle has been battling with lower-cost competitors for primacy in the infrastructure space. Many businesses have gone for lower-cost options, not being completely comfortable with Oracle’s claim of cost superiority because of better throughput. However, right now Zoom is pushing about 7 petabytes of data through Oracle Cloud Infrastructure daily, equivalent to about 93 years’ worth of HD video.
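As a back-of-the-envelope check on that comparison (my own arithmetic, not a figure from Oracle or Zoom), the two quoted numbers imply an assumed HD bitrate of roughly 19 Mbps:

```python
# Back-of-the-envelope check of "7 petabytes per day ~= 93 years of HD video".
# The HD bitrate is the derived quantity; nothing here comes from Oracle or Zoom.
petabytes_per_day = 7
bits_per_day = petabytes_per_day * (10**15) * 8           # decimal petabytes to bits
seconds_in_93_years = 93 * 365.25 * 24 * 3600
implied_hd_bitrate_bps = bits_per_day / seconds_in_93_years
print(f"Implied HD bitrate: {implied_hd_bitrate_bps / 1e6:.1f} Mbps")
# Prints roughly 19 Mbps, i.e. a high-bitrate HD stream.
```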

Oracle still might need to battle with others over total cost of ownership and similar metrics, but suddenly its capacity and scalability no longer are issues.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.


Denis Pombriant is a well-known CRM industry analyst, strategist, writer and speaker. His new book, You Can’t Buy Customer Loyalty, But You Can Earn It, is now available on Amazon. His 2015 book, Solve for the Customer, is also available there.


CRM Buyer


Storage 101: Data Center Storage Configurations

April 14, 2020   BI News and Info

Today’s IT teams are turning to a variety of technologies to provide storage for their users and applications. Not only does this include the wide range of hard disk drives (HDDs) and solid-state drives (SSDs), but also technologies such as cloud storage, software-defined storage, and even converged, hyperconverged, or composable infrastructures.

Despite the various options, many data centers continue to rely on three traditional configurations: direct-attached storage (DAS), network-attached storage (NAS), and the storage area network (SAN). Each approach offers both advantages and disadvantages, but it’s not always clear when to use one over the other or the role they might play in more modern technologies such as cloud storage or hyperconverged infrastructures (HCIs).

This article explores the three configurations in order to provide you with a better sense of how they work and when to use them. Later in this series, I’ll cover the more modern technologies so you have a complete picture of the available options and what storage strategies might be best for your IT infrastructure. Keep in mind, however, that even with these technologies, DAS, NAS, and SAN will likely still play a vital role in the modern data center.

Direct-Attached Storage

As the name suggests, DAS is a storage configuration in which HDDs or SSDs are attached directly to a computer, rather than connecting via a network such as Ethernet, Fibre Channel, or InfiniBand. DAS typically refers to HDDs or SSDs. Other storage types, such as optical or tape drives, can theoretically be considered DAS if they connect directly to the computer, but references to DAS nearly always refer to HDDs or SSDs, including those in this article.

DAS can connect to a computer internally or externally. External DAS can be a single drive or part of an array or RAID configuration. Whether internal or external, the DAS device is dedicated to and controlled by the host computer.

A computer’s DAS drive can be shared so that other systems can access the drive across the network. Even in this case, however, the computer connected to the drive still controls that drive. Other systems cannot connect to the drive directly but must communicate with the host computer to access the stored data.

DAS connects to a computer via an interface such as Serial-Attached SCSI (SAS), Serial Advanced Technology Attachment (SATA), Small Computer System Interface (SCSI), or Peripheral Component Interconnect Express (PCIe). Along with other storage technologies, the interface can have a significant impact on drive performance and is an important consideration when choosing a DAS drive. (See the first article in this series for information about interfaces and related storage technologies.)
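As a quick illustration of those interfaces in practice (not part of the original article), a Linux host will report the transport each directly attached drive uses. The sketch below simply wraps the lsblk utility from util-linux in Python and assumes lsblk is available on the system.

```python
# Minimal sketch: list directly attached block devices and their transport
# (e.g. sata, sas, nvme, usb) using lsblk from util-linux. Assumes a Linux
# host with lsblk on the PATH; the column names are standard lsblk columns.
import subprocess

def list_das_devices():
    out = subprocess.run(
        ["lsblk", "-d", "-o", "NAME,TRAN,ROTA,SIZE,MODEL"],  # -d: whole disks only
        capture_output=True, text=True, check=True,
    )
    return out.stdout

if __name__ == "__main__":
    # ROTA=1 indicates a rotational drive (HDD); ROTA=0 indicates an SSD.
    print(list_das_devices())
```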

Some IT teams turn to DAS because it typically provides better performance than networked storage solutions such as NAS and SAN. When using DAS, the host server does not need to contend with potential network bottlenecks such as sluggish network speed or network congestion, and the data is by definition in close proximity to the server. Other systems that connect to the host might run into network issues, but the host itself—and the applications that run on it—have unencumbered access to data hosted on DAS.

DAS is also cheaper and easier to implement and maintain than networked systems such as NAS or SAN. A DAS device can often be implemented through a simple plug-and-play operation, with little administrative overhead. Because DAS storage includes a minimal number of components, other than the SSD or HDD itself, the price tag tends to be much lower than the networked alternatives.

DAS is not without its downsides, however. Because a server can support only a relatively small number of expansion slots or external ports, DAS has limited scalability. In addition, limitations in the server’s compute resources can also impact performance when sharing a drive, as can the data center’s network if contention issues arise. DAS also lacks the type of advanced management and backup features provided by other systems.

Despite these disadvantages, DAS can still play a vital role in some circumstances. For example, high-performing applications or virtualized environments can benefit from DAS because it’s generally the highest performance option, and DAS eliminates potential network bottlenecks. In addition, small-to-medium sized businesses—or departments within larger organizations—might turn to DAS because it’s relatively simple to implement and manage and costs less.

DAS can also be used in hyperscale systems such as Apache Hadoop or Apache Kafka to support large, data-intensive workloads that can be scaled out across a network of distributed computers. More recently, DAS has been gaining traction in HCI appliances, which are made up of multiple server nodes that include both compute and storage resources. The usable storage in each node is combined into a logical storage pool for supporting demanding workloads such as virtual desktop infrastructures (VDIs).

Network-Attached Storage

NAS is a file-level storage device that enables multiple users and applications to access data from a centralized system via the network. With NAS, users have a single access point that is scalable, relatively easy to set up, and cheaper than options such as SAN. NAS also includes built-in fault tolerance, management capabilities, and security protections, and it can support features such as replication and data deduplication.

A NAS device is an independent node on the local area network (LAN) with its own IP address. It is essentially a server that contains multiple HDDs or SSDs, along with processor and memory resources. The device typically runs a lightweight operating system (OS) that manages data storage and file sharing, although in some cases it might run a full OS such as Windows Server or Linux.

Users and applications connect to a NAS device over a TCP/IP network. To facilitate data transport, NAS also employs a file transfer protocol. Some of the more common protocols are Network File System (NFS), Common Internet File System (CIFS), and Server Message Block (SMB). However, a NAS device might also support Internetwork Packet Exchange (IPX), NetBIOS Extended User Interface (NetBEUI), Apple Filing Protocol (AFP), Gigabit Ethernet (GigE), or one of several others. Most NAS devices support multiple protocols.
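To make the file-level access model concrete, here is a minimal sketch of mounting an NFS export from a hypothetical NAS and reading it through the ordinary file API. The server name, export path, and mount point are placeholders, and it assumes a Linux client with the NFS client utilities installed and root privileges for the mount.

```python
# Minimal sketch: mount an NFS share exported by a NAS device and list its
# contents. "nas01", "/export/data", and "/mnt/nas" are hypothetical
# placeholders; requires a Linux client with the NFS utilities and root rights.
import subprocess
from pathlib import Path

NAS_EXPORT = "nas01:/export/data"
MOUNT_POINT = Path("/mnt/nas")

def mount_nas():
    MOUNT_POINT.mkdir(parents=True, exist_ok=True)
    # Once mounted, the share behaves like any local directory (file-level storage).
    subprocess.run(["mount", "-t", "nfs", NAS_EXPORT, str(MOUNT_POINT)], check=True)

if __name__ == "__main__":
    mount_nas()
    for entry in MOUNT_POINT.iterdir():
        print(entry.name)
```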

NAS devices are generally easy to deploy and operate and relatively inexpensive when compared to SANs. In addition, users and applications running on the same network can easily access their files, without the limitations they might encounter if retrieving data from DAS. NAS devices can also be scaled out or integrated with cloud services. In addition, they provide built-in redundancy while offering a great deal of flexibility.

That said, a NAS device must compete with other traffic on the network, so contention can be an issue, especially if network bandwidth is limited. It should be noted, however, that NAS is often configured on private networks, which can help mitigate contention issues. However, too many users can impact storage performance, not only on the network, but also in the NAS device itself. Many NAS devices use HDDs, rather than SSDs, increasing the risk of I/O contention as more users try to access storage.

Because of the network and concurrency issues, NAS is often best suited for small-to-medium sized businesses or small departments within larger organizations. NAS might be used for distributing email, collaborating on spreadsheets, or streaming media files. NAS can also be used for network printing, private clouds, disaster recovery, backups, file archives, or any other use cases that can work within NAS’s limitations, without overwhelming the network or file system.

When deciding whether to implement a NAS device, you should consider the number of users, types of applications, available network bandwidth, and any other factors specific to your environment. DAS might be the optimal choice because it’s typically more performant, cheaper, and easier to set up than NAS. On the other hand, you might consider looking to a SAN for increased scalability and additional management features.

Storage Area Network

A SAN is a dedicated, high-speed network that interconnects one or more storage systems and presents them as a pool of block-level storage resources. In addition to the storage arrays themselves, a SAN includes multiple application servers for managing data access, storage management software that runs on those servers, host bus adapters (HBAs) to connect to the dedicated network, and the physical components that make up that network’s infrastructure, which include high-speed cabling and special switches for routing traffic.

SAN storage arrays can be made up of HDDs or SSDs or a combination of both in hybrid configurations. A SAN might also include one or more tape drives or optical drives. The management software consolidates the different storage devices into a unified resource pool, which enables each server to access the devices as though they were directly connected to that server. Each server also interfaces with the main LAN so client systems and applications can access the storage.

There is a widespread myth that SANs are high-performing systems, but historically this has rarely been true. In fact, slow-performing SANs are ubiquitous across data centers and are first and foremost optimized for data management, not performance. However, now that SSDs are becoming more common, hybrid or all-flash SANs are bringing performance to the forefront.

Integral to an effective SAN solution is a reliable, high-performing network capable of meeting workload demands. For this reason, many modern SANs are based on Fibre Channel, a technology for building network topologies that can deliver high bandwidth and exceptional throughput, with speeds up to 128 gigabits per second (16 GB per second). Unfortunately, Fibre Channel is also known for being complex and pricey, causing some organizations to turn to alternatives such as Internet SCSI (iSCSI), Fibre Channel over Ethernet (FCoE), or even NVMe over Fabrics (NVMe-oF).
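For teams weighing the iSCSI alternative, the sketch below shows the usual discover-and-login sequence with open-iscsi’s iscsiadm, wrapped in Python. The portal address and target IQN are placeholders, and the commands assume the open-iscsi tools and root privileges on a Linux host; once the login succeeds, the SAN volume appears to the host as an ordinary block device.

```python
# Minimal sketch: attach block storage from an iSCSI SAN using open-iscsi.
# The portal IP and target IQN below are hypothetical placeholders; requires
# the open-iscsi tools (iscsiadm) and root privileges on a Linux host.
import subprocess

PORTAL = "192.0.2.10:3260"                       # SAN portal (documentation IP)
TARGET = "iqn.2001-05.com.example:storage.lun1"  # target IQN on that portal

def discover_targets():
    # Ask the portal which targets it exposes.
    out = subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

def login_to_target():
    # Log in; the LUN then shows up as a local block device (e.g. /dev/sdX).
    subprocess.run(
        ["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"],
        check=True,
    )

if __name__ == "__main__":
    print(discover_targets())
    login_to_target()
```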

With the right network topology and internal configuration in place, a SAN can deliver a block-level storage solution that offers high availability and scalability, possibly even high performance. A SAN includes centralized management, failover protection, and disaster recovery, and it can improve storage resource utilization. Because a SAN runs on a dedicated network, the LAN doesn’t have to absorb the SAN-related traffic, eliminating potential contention.

However, a SAN is a complex environment that can be difficult to deploy and maintain, often requiring professionals with specialized skill sets. This alone is enough to drive up costs, but the SAN components themselves can also be pricey. An IT team might try to reduce costs by cutting back in such areas as Fibre Channel or licensed management capabilities, but the result could be lower performance or more application maintenance.

For many organizations—typically larger enterprises—the costs and complexities are worth the investment, especially when dealing with numerous or massive datasets and applications that support a large number of users. SANs can benefit use cases such as email programs, media libraries, database management systems, or distributed applications that require centralized storage and management.

Organizations looking for networked storage solutions often weigh SAN against NAS, taking into account complexity, reliability, performance, management features, and overall cost. NAS is certainly cheaper and easier to deploy and maintain, but it’s not nearly as scalable or fast. For example, a NAS uses file storage, and a SAN uses block storage, which incurs less overhead, although it’s not as easy to work with. Your individual circumstances will determine which storage system is the best fit. (For information about the differences between block and file storage, refer to the first article in this series).

Moving Ahead with DAS, NAS, and SAN

Like any storage technology, SANs are undergoing a transition. For example, vendors now offer something called unified SAN, which can support both block-level and file-level storage in a single solution. Other technologies are also emerging for bridging the gap between NAS and SAN. One example is VMware vSphere, which makes it possible to use NAS and SAN storage in the same cluster as vSAN, VMware’s virtual SAN technology. Another approach to storage is the converged SAN, which implements the SAN environment on the same network used for other traffic, thus eliminating the network redundancy that comes with a more conventional SAN.

For many organizations, traditional DAS, NAS, and SAN solutions properly sized and configured will handle their workloads with ease. If that’s insufficient, they might consider newer technologies that enhance these core configurations, such as converged or hyperconverged infrastructures. Today’s organization can also take advantage of such technologies as cloud storage, object storage, or software-defined storage, as well as the various forms of intelligent storage that are taking hold of the enterprise.

There is, in fact, no shortage of storage options, and those options grow more sophisticated and diverse every day, as technologies continue to evolve and mature in an effort to meet the needs of today’s dynamic and data-intensive workloads.


SQL – Simple Talk


Putting CX at the Center of Testing Strategies

August 28, 2019   CRM News and Info

From e-commerce to banking applications to healthcare systems — and everything in between — if it’s digital, users expect it to work at every interaction, and on every possible platform and operating system.

However, despite the need to provide a digital experience that delights, Gartner research suggests that only 18 percent of companies are delivering their desired customer experience.

A big part of this gap between expectation and reality is that digital businesses depend on the quality of their software and applications, which frequently do not perform as they should. In an age when digital transformation is so dependent upon better quality software, testing has never been more critical. However, for the last decade, testing has focused on verification — that is, does it work? — rather than validation, meaning, does it do what I expect and want?

As companies progress in their digital transformation journeys, it’s critical that testing focus on answering the latter question. In other words, software testing must pivot from simply checking that an application meets technical requirements to ensuring that it delivers better user experiences and business outcomes.

Verification vs. Validation

Testing needs to shift from a verification-driven activity to a continuous quality process. The goal is to understand how customer experiences and business outcomes are affected by the technical behavior of the application. More than this, though, it’s about identifying opportunities for improvements and predicting the business impact of those improvements.

Verification testing merely checks that the code complies with a specification provided by the business. These specifications are assumed to be perfect and completely replicate how users interact with and use the software.

However, there’s simply no way a specification writer could know how users will react to every part of the software or capture everything that could impact customer experience. Even if there were, it would make the software development painfully slow. By adopting this approach, the assumption is that validation also has been done as a result. However, this is a mirage rather than a reality, and has resulted in the customer experience being ignored from a software testing perspective.

Companies must abandon the outdated approach of testing only whether the software works, and instead embrace a strategy that evaluates the user perspective and delivers insights to optimize their experiences. If you care about your user experience and if you care about business outcomes, you need to be testing the product from the outside in, the way a user does. Only then can you truly evaluate the user experience.

A user-centric approach to testing ensures that user interface errors, bugs and performance issues are identified and addressed long before the application is live and has the chance to have a negative impact on the customer experience and, potentially, brand perception. Fast, reliable websites and applications increase engagement, deliver revenue, and drive positive business outcomes. Ensuring that these objectives are met should be an essential part of modern testing strategies.

For example, a banking app may meet all the specification criteria, but if it requires customers to add in their account details each time they want to access their account, they will lose patience quickly, stop using the app, and ultimately move to a competitor. This is exactly why businesses need to rethink how they evaluate software and applications and re-orient their focus to meet their customers’ expectations and needs.

If businesses want to close the customer experience gap, then they need to rethink how they evaluate software and applications. Validation testing should be a foundational element of testing strategies. However, organizations need to start testing the user experience and modernizing their approach so that they can keep up with the pace of DevOps and continuous delivery. This is an essential driver behind digital transformation.

Historically the only organizations carrying out validation testing have been teams with experienced manual exploratory testing capabilities. Exploratory testing evaluates functionality, performance and usability, and it takes into account the entire universe of tests. However, it’s not transparent, qualitative or replicable, and it’s difficult to include within a continuous development process. Manual exploratory testing is expensive to scale, as it’s time-consuming and the number of skilled testers is limited.

Customer-Driven Testing 101

Customer-driven testing is a new approach that automates exploratory testing for scalability and speed. Fundamentally, customer-driven testing focuses on the user experience rather than the specification. It also helps accelerate traditional specification-driven testing. Artificial intelligence (AI) and machine learning (ML) combined with model-based testing have unlocked the ability to carry out customer-driven testing.

The intelligent automation of software testing enables businesses to test and monitor the end-to-end digital user experience continuously; it analyzes apps and real data to auto-generate and execute user journeys. It then creates a model of the system and user journeys, and automatically generates test cases that provide robust coverage of the user experience — as well as of system performance and functionality.
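The tooling described here is proprietary, but the underlying model-based idea can be pictured with a toy example: represent the user journeys as a small directed graph and enumerate paths through it as candidate test cases. The sketch below illustrates only that general technique, not any vendor’s actual algorithm; the journey model is invented for the example.

```python
# Toy illustration of model-based test generation: user journeys are modeled
# as a directed graph of screens/actions, and test cases are produced by
# enumerating loop-free paths from the entry point. This is a sketch of the
# general technique, not any vendor's implementation.

# Hypothetical model of a simple shopping journey.
JOURNEY_MODEL = {
    "home": ["search", "login"],
    "login": ["home"],
    "search": ["product_page"],
    "product_page": ["add_to_cart", "search"],
    "add_to_cart": ["checkout"],
    "checkout": [],          # end state
}

def generate_test_cases(model, start="home", max_depth=6):
    """Enumerate loop-free paths from the start node; each complete path is a test case."""
    cases = []

    def walk(node, path):
        next_nodes = [n for n in model.get(node, []) if n not in path]
        if not next_nodes or len(path) >= max_depth:
            cases.append(path)   # end state, dead end, or depth limit reached
            return
        for nxt in next_nodes:
            walk(nxt, path + [nxt])

    walk(start, [start])
    return cases

if __name__ == "__main__":
    for i, case in enumerate(generate_test_cases(JOURNEY_MODEL), 1):
        print(f"Test case {i}: " + " -> ".join(case))
```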

Through automated feedback loops, you can zoom in on problems quickly and address them. Once that is in place, the intelligent automation can go even further — to where it builds the model itself by watching and understanding the system. It hunts for bugs looking at the app, the testing, and development to understand the risk.

It assesses production to clarify what matters to the business. This intelligence around risk factors and business impact direct the testing to focus in the right places. Unlike the mirage of testing to a specification, the actual customer journey drives the testing.

AI and ML technologies recommend the tests to execute, learning continuously and performing intelligent monitoring that can predict business impacts and enable development teams to fix issues before they occur. These cutting-edge technologies are core components of customer-driven testing, but another essential element is needed: human intelligence.

The Human Factor

Customer-driven testing doesn’t mean the death of the human tester. Machines are great at automating processes and correlating data but are not able to replicate the creative part of testing. This involves interpreting the data into actual human behavior and developing hypotheses about where problems are going to be.

The tester needs to provide hints and direction, as machines can’t replicate their experiences and intuition. Human creativity is essential to guide the customer-driven testing process.

Automated analytics and test products provide vast volumes of data about how a user behaved at the human-to-app interface, but it requires a human to understand why the person took that action. The human will set the thresholds for errors and will pull the levers and guide the algorithms, for example. Customer-driven testing is possible only with human testers augmented by state-of-the-art technology.

CX and the Path Forward

Digitization is rapidly changing the way companies and customers interact with each other. Understanding and optimizing the customer experience and ensuring apps deliver on business goals are now mission-critical for digital businesses. Practices that merely validate that software works must be retired, or organizations run the risk of lagging behind their competitors.

A new approach to testing is essential. The combination of AI-fueled testing coupled with human testers directing the automation makes customer-driven testing possible. If businesses want to close the customer experience gap, then they have to pivot and look at the performance of their digital products through the eyes of the customer. If software truly runs the world, then you need to make sure that it’s delighting your customers rather than merely working.


Antony Edwards is COO of Eggplant.


CRM Buyer


Next-Generation Competence Center (Part III): Aligning Business And IT

August 1, 2019   BI News and Info

In my previous article in this series, I examined the question: “Where is the business going, and consequently, what should the role of IT be?”

In this article, I will share an approach to align the business and IT strategy in a way that will keep the promise of long-lasting corporate brand identity.

You can say, “Where business goes, IT follows.” The problem is that the “where” (i.e., a desired target state) is hiding the “who” (i.e., who you are and who you want to be by reaching a new destination).

This reminds me of an evergreen quote from Lucius Annaeus Seneca: “Our plans miscarry because they have no aim. When a man does not know what harbor he is making for, no wind is the right wind.”

The point is that the business and IT aims must be aligned, and the two identities should converge. The question is how to do that if what IT wants to do first is not necessarily what the business is asking for.

The conflicting identities of business and IT popped up a few months ago when I was breaking the ice with customers who were interested in managing their systems with a more agile IT competence center. They wanted this competence center to run and support the deployment of SAP S/4HANA, which was going live imminently.

I was looking for a model that would help them plan and build for their system landscape’s evolution. I ended up combining three key philosophies from very different sources:

  1. Logical levels learned in The Art & Science of Coaching training, which provides the structure for moving from an inspiring vision to more concrete actions in specific times and places
  2. The corporate brand identity matrix, from the Harvard Business Review, to find the “unique twist” of a company’s identity based on its brand or core values
  3. Diversity and inclusion concepts learned from SAP’s employee training, which adds the “fairness” spices

Starting with “logical levels”

The student guide for the Erickson Academy’s The Art & Science of Coaching shows a pyramid that elegantly sorts the structure and dependencies of “five plus one” logical levels:

Vision is the “plus one” level of the pyramid, which inspires identity (the top logical level) rooted in core values representing the reason specific behaviors (skills and actions) define the different ways to play in a given context.

Those general concepts also work well for business and IT people working together in alignment for a common vision, mission, and strategy:

  1. Identity: Who are you now? What sort of person/organization would you rather be? (Shifting into a new role)
  2. Values: Why is this important? What values does it have? (Values behind identity and vision)
  3. Skills: How will you achieve it? What capabilities do you have? What skills do you need to develop? (Knowledge, experience, methods, and tools)
  4. Actions/behaviors: What actions need to be taken? What steps could you take to support X? (Action plan, steps, behaviors)
  5. Environment: Where will you want this? When will you do it? (Time and geography)

Deriving IT mission, vision, and strategy from “corporate and brand identity”

Stephen A. Greyser and Mats Urde, in their HBR article “What Does Your Corporate Brand Stand For?” (issue 97, Jan./Feb. 2019), illustrate a framework for a corporate brand identity definition.

The brand core is at the center of the framework, surrounded by eight elements, making room for nine questions to be answered.


After reading this article, I thought of my current next-generation competence center design project and decided to first answer the corporate identity questions from the business point of view, then answer the same questions from the IT point of view.

My IT counterparts were dubious after the first attempt to answer the questions from the IT point of view. Later, when I insisted on doing this exercise in a timebox fashion (I stepped out of the room for less than an hour), I heard excitement and realized how well they did in filling out different forms twice, based on the different points of view.

The first time you answer the nine questions from both points of view – business (as suggested by HBR) and IT (for the sake of the next-generation IT competence center) – your answers might look sloppy or disconnected. This is because consistency must be checked along four directions, aimed at enhancing four “angles.”

The clearer and more logical your definitions (answers) and narrative (combination of answers), the more consistent the identity matrix and the stronger your identity will be. That’s provided that all the four directions are properly crossing the very same core values, i.e., core brand.


In short, check the nine answers in clusters of four, as follows:

  1. Strategy: Is the mission (what you promise) consistent with where you want to be (vision)?
  2. Competition: Is what you offer (your value proposition) unique due to particularly distinctive skills?
  3. Interaction: Does the way you interact (your relationships) delight your customers thanks to uncommon behaviors rooted in a well-known culture?
  4. Communication: Is your communication style fostered by unique personality traits?

Repeat this process a few times and try to discard concepts that don’t contribute to clear answers. Sometimes less is more.
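To make the clusters-of-four check more concrete, here is a minimal sketch of how the exercise could be captured in code. The element names paraphrase the nine questions above, and the data structures and sample answer are purely illustrative, not part of the HBR framework.

```python
# A minimal sketch of the identity-matrix exercise: nine elements answered from
# a given point of view (business or IT), then checked in four clusters that
# all cross the brand core. Element names paraphrase the questions above;
# the sample answer is made up.
ELEMENTS = [
    "mission", "vision", "value proposition", "competences",
    "relationships", "culture", "expression", "personality", "core",
]

CLUSTERS = {
    "strategy": ["mission", "core", "vision"],
    "competition": ["value proposition", "core", "competences"],
    "interaction": ["relationships", "core", "culture"],
    "communication": ["expression", "core", "personality"],
}

def unanswered_clusters(answers: dict) -> list:
    """Return the clusters whose elements are not all answered yet."""
    return [
        cluster
        for cluster, elements in CLUSTERS.items()
        if any(not answers.get(element, "").strip() for element in elements)
    ]

# One matrix per point of view, as in the timeboxed exercise described above.
business_answers = {element: "" for element in ELEMENTS}
business_answers["core"] = "fairness and craftsmanship"  # illustrative only
print(unanswered_clusters(business_answers))  # clusters that still need work
```

Running the check after each iteration makes it obvious which of the four directions still fails to cross the brand core cleanly.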

Adding “diversity and inclusion”

Building a culture of diversity and inclusion plays an important role in aligning business and IT strategy.

Diversity and inclusion training can stretch minds, enrich vocabulary, and enhance the ability to think differently. It can create a balance between individual and collective culture and help you achieve measurable targets, fueling a new, more open, and fairer way of working.

Here are some things to consider when building a culture of diversity and inclusion:

  1. Culture elements: Shared values, knowledge, experiences, beliefs, and behaviors
  2. Culture transmission: Socializing agents (people we are with) and institutions (school, government)
  3. Cultural attitudes: Role models made of different attributes, to be tuned to better fit a given target cultural attitude:

    • Status: Rank-oriented (do what the boss commands) vs. equality-oriented (challenging/arguing with the boss is OK)
    • Identity: Individual-oriented (what I/he did) vs. group-oriented (the result we made)
    • Activity: Task-oriented (duty first) vs. relationship-oriented (people first)
    • Risk: Risk-taker (change-driven) vs. stability-seeking (steady state is better)
    • Communication: Direct (talk/write clearly, facts only) vs. indirect (room for interpretation, body language)

Thinking more deeply about who we are and who we want to be is always good, both for us and for the people who want to take action.

Are you intrigued or skeptical?

I’ll be happy to hear your comments and adjust the recipe for aligning business and IT strategy.

Stretch your mind on the Next-Generation Competence Center by reading my previous articles on the topic, “Part I” and “Part II.”


Digitalist Magazine


Google debuts better transcription, endless streaming, and more in Contact Center AI

July 23, 2019   Big Data

Last July, during its Cloud Next conference in San Francisco, Google unveiled Contact Center AI. The machine learning-powered customer support toolkit taps Dialogflow (a conversational experiences development platform) and Cloud Speech-to-Text (a suite of audio-to-text technologies) to interact with callers over the phone. It has been a long time coming, but this week the tech giant bolstered the nascent service with a raft of features that vastly improve speech recognition accuracy.

“Contact centers are critical to many businesses, and the right technologies play an important role in helping them provide outstanding customer care,” wrote product managers Dan Aharon and Shantanu Misra in a blog post.

Automatic speech adaptation


Contact Center AI’s new Auto Speech Adaptation feature, which is available in beta, targets scenarios where Dialogflow agents’ speech recognition systems might confuse similar-sounding words. It takes into account context — specifically training phrases, entities, and other agent-specific information — to respond appropriately using a learning process known as speech adaptation. For instance, if a caller attempts to arrange a product return, Contact Center AI will leverage its knowledge of the returns process to avoid mistaking the word “mail” for “nail.”


Auto Speech Adaptation is switched off by default. You’ll find it in the Dialogflow console.
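Auto Speech Adaptation itself is just that console toggle, but the same idea can be approximated manually by passing agent-specific phrases along with a detect-intent request. Below is a minimal sketch using the Dialogflow v2 Python client; the project ID, session ID, phrases, and audio file are hypothetical, and the exact fields should be checked against the current client library.

```python
# A minimal sketch (not the Auto Speech Adaptation feature itself): manually
# biasing Dialogflow's speech recognition toward agent-specific phrases so
# "mail" is less likely to be heard as "nail". IDs, phrases, and the audio
# file below are made up for illustration.
from google.cloud import dialogflow_v2 as dialogflow

session_client = dialogflow.SessionsClient()
session = session_client.session_path("my-gcp-project", "caller-session-001")

audio_config = dialogflow.InputAudioConfig(
    audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
    sample_rate_hertz=8000,
    language_code="en-US",
    speech_contexts=[
        dialogflow.SpeechContext(phrases=["mail my return label", "return"], boost=10.0)
    ],
)

with open("caller_utterance.wav", "rb") as f:
    input_audio = f.read()

response = session_client.detect_intent(
    request={
        "session": session,
        "query_input": dialogflow.QueryInput(audio_config=audio_config),
        "input_audio": input_audio,
    }
)
print(response.query_result.query_text)        # what the caller was heard to say
print(response.query_result.fulfillment_text)  # the agent's reply
```

The appeal of the automatic feature is that it derives these phrases from the agent's own training phrases and entities, so none of this plumbing is required.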

Baseline model improvements

Google recently launched a preview of premium speech-to-text models tuned to specific use cases, and in February it made one of them, a phone model optimized for two- to four-person conversations, generally available. The Mountain View company claimed at the time that this model produced 62% fewer transcription errors, up from the 54% reduction achieved by its predecessor. Today, Google revealed that its engineers have further optimized the model for short utterances in U.S. English; it is now 15% more accurate relative to the previously announced improvements.

“Applying speech adaptation can also provide additional improvements on top of that gain,” wrote Aharon and Misra. “We’re constantly adding more quality improvements to the roadmap — an automatic benefit to any IVR or phone-based virtual agent, without any code changes needed — and will share more about these updates in [the] future.”

Better transcription and endless streaming

Increased contextual awareness and enhanced speech-to-text aren’t the only new natural language understanding improvements coming down the Contact Center AI pipeline. Google today debuted in beta “richer” manual speech adaptation and entity classes, in addition to expanded phrase limits, endless streaming, and more.

There’s a trio of new features within SpeechContext parameters, the collection of Cloud Speech-to-Text settings and toggles that tailor transcriptions to businesses’ and verticals’ vernaculars. SpeechContext classes — prebuilt entities reflecting concepts like digit sequences, addresses, numbers, and money denominations — optimize ASR for a list of words at once. SpeechContext boost, meanwhile, helps adjust speech adaptation strength while cutting down on the number of false positives — i.e., cases where a phrase wasn’t mentioned but appears in a transcript. Lastly, SpeechContext now supports up to 5,000 phrase hints per API request (up from 500), increasing the probability that uncommon words or phrases will be captured by ASR.

[Figure: Using SpeechContext classes to refine transcription. Image credit: Google]
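As a rough illustration of how these SpeechContext parameters fit together, the sketch below biases the enhanced phone model toward a prebuilt class token and a few business-specific phrases with a boost value. It assumes the v1p1beta1 google-cloud-speech Python client; the class token, phrases, boost value, and audio file are examples, not recommendations.

```python
# A minimal sketch, assuming the v1p1beta1 google-cloud-speech client and an
# 8 kHz phone recording named "caller.wav"; phrases and values are hypothetical.
from google.cloud import speech_v1p1beta1 as speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=8000,
    language_code="en-US",
    model="phone_call",          # the phone model discussed above
    use_enhanced=True,           # enhanced (premium) variant
    speech_contexts=[
        speech.SpeechContext(
            # A prebuilt class token plus business-specific phrases.
            phrases=["$OOV_CLASS_DIGIT_SEQUENCE", "return label", "RMA number"],
            boost=15.0,          # strength of the adaptation
        )
    ],
)

with open("caller.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```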

Perhaps more significantly, Cloud Speech-to-Text, which since launch has only supported streaming audio in one-minute increments, can now process sessions up to five minutes in length and resume streaming where the previous sessions left off. (Google notes that this effectively makes live automatic transcription infinite in length.) Additionally, Cloud Speech-to-Text now natively supports the MP3 file format; previously, MP3 files had to be expanded into the LINEAR16 format prior to processing.
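Native MP3 support means the LINEAR16 conversion step can simply be dropped. A rough sketch, again assuming the v1p1beta1 client (which exposes the MP3 encoding value in beta); the file name is hypothetical:

```python
# A rough sketch: recognizing an MP3 recording directly, without first
# expanding it to LINEAR16. Assumes the v1p1beta1 client exposes the MP3
# encoding value; the file name is made up.
from google.cloud import speech_v1p1beta1 as speech

client = speech.SpeechClient()
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.MP3,
    sample_rate_hertz=16000,
    language_code="en-US",
)

with open("call_recording.mp3", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

response = client.recognize(config=config, audio=audio)
print(response.results[0].alternatives[0].transcript if response.results else "")
```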

“We’re excited to see how these improvements to speech recognition improve the customer experience for contact centers of all shapes and sizes — whether you’re working with one of our partners to deploy the Contact Center AI solution or taking a DIY approach using our conversational AI suite,” wrote Aharon and Misra.

The veritable slew of announcements follows the debut of CallJoy, a graduate of Google’s Area 120 incubator that aims to help small businesses harness language models to automate incoming call management. More recently, Google made available in beta Document Understanding AI, a serverless platform that automatically classifies and structures data within scanned physical and digital documents, and Vision Product Search, which uses the company’s Cloud Vision technology to enable stores to create Google Lens-type smartphone experiences.

Contact Center AI remains in beta, with partners including 8×8, Avaya, Salesforce, Accenture, Cisco, Five9, Genesys, Mitel, Twilio, and Vonage.


Big Data – VentureBeat


DoD’s Joint AI Center to open-source natural disaster satellite imagery data set

June 24, 2019   Big Data

As climate change escalates, the impact of natural disasters is likely to become less predictable. To encourage the use of machine learning for building damage assessment, Carnegie Mellon University’s Software Engineering Institute and CrowdAI, together with the U.S. Department of Defense’s Joint AI Center (JAIC) and Defense Innovation Unit, this week shared plans to open-source a labeled data set covering some of the largest natural disasters of the past decade. Called xBD, it covers the impact of disasters around the globe, like the 2010 earthquake that hit Haiti.

“Although large-scale disasters bring catastrophic damage, they are relatively infrequent, so the availability of relevant satellite imagery is low. Furthermore, building design differs depending on where a structure is located in the world. As a result, damage of the same severity can look different from place to place, and data must exist to reflect this phenomenon,” reads a research paper detailing the creation of xBD.

xBD includes approximately 700,000 satellite images of buildings before and after eight different kinds of natural disasters, including earthquakes, wildfires, floods, and volcanic eruptions. Covering about 5,000 square kilometers, it contains images of floods in India and Africa, dam collapses in Laos and Brazil, and historic deadly fires in California and Greece.

The data set will be made available in the coming weeks alongside the xView 2.0 Challenge to unearth additional insights from xBD, coauthor and CrowdAI machine learning lead Jigar Doshi told VentureBeat. The data set collection effort was informed by the California Air National Guard’s approach to damage assessment from wildfires.

“This process informed a set of criteria that guided the specific data we targeted for inclusion in the data set, as well as weaknesses of the current damage assessment processes. Each disaster is treated in isolation. The process human analysts use is not repeatable or reproducible across different disaster types. This irreproducible data presents a major issue for use by machine learning algorithms; different disasters affect buildings in different ways, and building structures vary from country to country, so determinism in the assessment is a necessary property to ensure machine learning algorithms can learn meaningful patterns,” the report reads.

The group also released the Joint Damage Scale, a building damage assessment scale that labels affected buildings as suffering “minor damage,” “major damage,” or “destroyed.” The images were drawn from DigitalGlobe’s Open Data program.
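Once xBD is released, a damage-assessment pipeline will need to pair pre- and post-disaster images and encode the Joint Damage Scale labels. The sketch below shows one plausible way to do that; the directory layout, file naming, and ordinal encoding are hypothetical and should be checked against the actual release.

```python
# A minimal sketch of pairing pre/post imagery with Joint Damage Scale labels.
# The directory layout and file naming are hypothetical; consult the xBD
# release and the xView 2.0 Challenge materials for the real format.
from pathlib import Path

# Ordinal encoding of the damage levels mentioned above (illustrative only).
DAMAGE_LEVELS = {"minor damage": 1, "major damage": 2, "destroyed": 3}

def pair_images(root: str):
    """Yield (pre, post) image paths that show the same area before and after."""
    root_path = Path(root)
    for pre in sorted(root_path.glob("*_pre_disaster.png")):
        post = pre.with_name(pre.name.replace("_pre_disaster", "_post_disaster"))
        if post.exists():
            yield pre, post

for pre, post in pair_images("xbd/images"):
    print(f"pre: {pre.name}  post: {post.name}")
```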

xBD was one of dozens of works presented earlier this week at the Computer Vision for Global Challenges workshop, held in conjunction with the Computer Vision and Pattern Recognition (CVPR) 2019 conference. The workshop received submissions from 15 countries.

Other work presented at the conference included research on things like spatial apartheid in South Africa, deforestation prevention in Chile, poverty prediction from satellite imagery, and penguin colony analysis in Antarctica.

In addition to its contributions to xBD, CrowdAI worked with Facebook AI last year to develop damage assessment methods derived from Santa Rosa fire and Hurricane Harvey satellite imagery. This project was based on work from the DeepGlobe satellite imagery challenge from CVPR 2018.

Facebook AI researchers are also using satellite imagery and computer vision that identifies buildings in order to create global population density maps. The initiative started in April with a map of Africa hosted by the United Nations Humanitarian Data Exchange.

Also part of CVPR this year, researchers from Wageningen University in the Netherlands presented work that explores weakly supervised methods of wildlife detection from satellite imagery, technology with applications for animal conservation.

Other highlights from CVPR 2019:

  • Microsoft GANs that can create images and storyboards from captions
  • Nvidia AI that improves existing computer vision and object detection systems
  • AI that can see around corners
  • Cruise open-sourced Webviz, a tool for robotics data analysis


Big Data – VentureBeat
