
Tag Archives: State

AI Weekly: The state of machine learning in 2020

November 27, 2020   Big Data

It’s hard to believe, but a year in which the unprecedented seemed to happen every day is just weeks from being over. In AI circles, the end of the calendar year means the rollout of annual reports aimed at defining progress, impact, and areas for improvement.

The AI Index is due out in the coming weeks, as is CB Insights’ assessment of global AI startup activity, but two reports — both called The State of AI — have already been released.

Last week, McKinsey released its global survey on the state of AI, a report now in its third year. Interviews with executives and a survey of business respondents found a potential widening of the gap between businesses that apply AI and those that do not.

The survey reports that AI adoption is more common in tech and telecommunications than in other industries, followed by automotive and manufacturing. More than two-thirds of respondents with such use cases say adoption increased revenue, but fewer than 25% saw significant bottom-line impact.

Along with questions about AI adoption and implementation, the McKinsey State of AI report examines companies whose AI applications led to EBIT growth of 20% or more in 2019. Among the report’s findings: Respondents from those companies were more likely to rate C-suite executives as very effective, and the companies were more likely to employ data scientists than other businesses were.

At rates of difference of 20% to 30% or more compared to others, high-performing companies were also more likely to have a strategic vision and AI initiative road map, use frameworks for AI model deployment, or use synthetic data when they encountered an insufficient amount of real-world data. These results seem consistent with a Microsoft-funded Altimeter Group survey conducted in early 2019 that found half of high-growth businesses planned to implement AI in the year ahead.

If there was anything surprising in the report, it’s that only 16% of respondents said their companies have moved deep learning projects beyond a pilot stage. (This is the first year McKinsey asked about deep learning deployments.)

Also surprising: The report showed that businesses made little progress toward mounting a response to risks associated with AI deployment. Compared with responses submitted last year, companies taking steps to mitigate such risks saw an average 3% increase in response to 10 different kinds of risk — from national security and physical safety to regulatory compliance and fairness. Cybersecurity was the only risk that a majority of respondents said their companies are working to address. The percentage of those surveyed who consider AI risks relevant to their company actually dropped in a number of categories, including in the area of equity and fairness, which declined from 26% in 2019 to 24% in 2020.

McKinsey partner Roger Burkhardt called the survey’s risk results concerning.

“While some risks, such as physical safety, apply to only particular industries, it’s difficult to understand why universal risks aren’t recognized by a much higher proportion of respondents,” he said in the report. “It’s particularly surprising to see little improvement in the recognition and mitigation of this risk, given the attention to racial bias and other examples of discriminatory treatment, such as age-based targeting in job advertisements on social media.”

Less surprising, the survey found an uptick in automation in some industries during the pandemic. VentureBeat reporters have found this to be true across industries like agriculture, construction, meatpacking, and shipping.

“Most respondents at high performers say their organizations have increased investment in AI in each major business function in response to the pandemic, while less than 30% of other respondents say the same,” the report reads.

The McKinsey State of AI in 2020 global survey was conducted online from June 9 to June 19 and garnered nearly 2,400 responses, with 48% reporting that their companies use some form of AI. A 2019 McKinsey survey of roughly the same number of business leaders found that while nearly two-thirds of companies reported revenue increases due to the use of AI, many still struggled to scale its use.

The other State of AI

A month before McKinsey published its business survey, Air Street Capital released its State of AI report, which is now in its third year. The London-based venture capital firm found the AI industry to be strong when it comes to company funding rounds, but its report calls centralization of AI talent and compute “a huge problem.” Other serious problems Air Street Capital identified include ongoing brain drain from academia to industry and issues with reproducibility of models created by private companies.

A number of the report’s conclusions are in line with a recent analysis of AI research papers that found the concentration of deep learning activity among Big Tech companies, industry leaders, and elite universities is increasing inequality. The team behind this analysis says a growing “compute divide” could be addressed in part by the implementation of a national research cloud.

As we inch toward the end of the year, we can expect more reports on the state of machine learning. The state of AI reports released in the past two months demonstrate a variety of challenges but suggest AI can help businesses save money, generate revenue, and follow proven best practices for success. At the same time, researchers are identifying big opportunities to address the various risks associated with deploying AI.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Khari Johnson

Senior AI Staff Writer

Big Data – VentureBeat

Acquisition of One Microsoft Dynamics Partner by Another Can Benefit State and Local Governments

September 24, 2020   Microsoft Dynamics CRM

AKA Enterprise Solutions was acquired in August 2020 by HSO, a global Microsoft cloud business applications partner. AKA has a strong state and local government (SLG) practice, and because of that, this acquisition means that HSO is now one of the largest Microsoft Dynamics 365 Partners focused on SLG in the U.S.
AKA has a strong reputation in the U.S. SLG and Microsoft communities for excellence in putting the Microsoft solution stack, including Dynamics 365 Sales (CRM), to work to help city, county, and state governments (and related agencies) achieve digital transformation.
HSO in the U.S. now has an even stronger team of talent and experience, enabling us to provide even better support and services. We offer:
• Deep public sector/government experience with city, county, and state governments as well as state agencies, including McHenry County, IL, the Los Angeles County Board of Supervisors, The City of Redmond, WA, and the Washington State Department of Ecology
• Experience, expertise, and 100% focus on the Microsoft ecosystem, including cloud/infrastructure, modern workplace, business applications, and data/analytics
• Proven intellectual property, including citizen engagement, improving services/government programs, and modernizing operations
• Development tools and resources that deliver fast time to value
• 24/7 managed services and support, delivered by public sector experts
Do you want to talk about how we can help your government or agency with its challenges and goals for transformation? Contact our SLG experts.

CRM Software Blog | Dynamics 365

AI Weekly: A biometric surveillance state is not inevitable, says AI Now Institute

September 5, 2020   Big Data

In a new report called “Regulating Biometrics: Global Approaches and Urgent Questions,” the AI Now Institute says that there’s a growing sense among regulation advocates that a biometric surveillance state is not inevitable.

The release of AI Now’s report couldn’t be more timely. As the pandemic drags on into the fall, businesses, government agencies, and schools are desperate for solutions that ensure safety. From tracking body temperatures at points of entry to issuing health wearables to employing surveillance drones and facial recognition systems, there’s never been a greater impetus for balancing the collection of biometric data with rights and freedoms. Meanwhile, there’s a growing number of companies selling what seem to be rather benign products and services that involve biometrics, but that could nonetheless become problematic or even abusive.

The trick of surveillance capitalism is that it’s designed to feel inevitable to anyone who would deign to push back. That’s an easy illusion to pull off right now, at a time when the reach of COVID-19 continues unabated. People are scared and will reach for a solution to an overwhelming problem, even if it means acquiescing to a different one.

When it comes to biometric data collection and surveillance, there’s tension and often a lack of clarity around what’s ethical, what’s safe, what’s legal — and what laws and regulations are still needed. The AI Now report methodically lays out all of those challenges, explains why they’re important, and advocates for solutions. Then it gives shape and substance to them through eight case studies that examine biometric surveillance in schools, police use of facial recognition technologies in the U.S. and U.K., national efforts to centralize biometric information in Australia and India, and more.

There’s a certain responsibility incumbent on everyone — not just politicians, entrepreneurs, and technologists, but all citizens — to acquire a working understanding of the sweep of issues around biometrics, AI technologies, and surveillance. This report serves as a reference for the novel questions that continue to arise. It would be an injustice to the 111-page document and its authors to summarize the whole of the report in a few hundred words, but it includes several broad themes.

The laws and regulations about biometrics as they pertain to data, rights, and surveillance are lagging behind the development and implementation of the various AI technologies that monetize them or use them for government tracking. This is why companies like Clearview AI proliferate — what they do is offensive to many, and may be unethical, but with some exceptions it’s not illegal.

Even the very definition of what biometric data is remains unsettled. There’s a big push to pause these systems while we create new laws and reform or update others — or ban the systems entirely because some things shouldn’t exist and are perpetually dangerous even with guardrails.

There are practical considerations that can shape how average citizens, private companies, and governments understand the data-powered systems that involve biometrics. For example, the concept of proportionality is that “any infringement of privacy or data-protection rights be necessary and strike the appropriate balance between the means used and the intended objective,” says the report, and that a “right to privacy is balanced against a competing right or public interest.”

In other words, the proportionality principle raises the question of whether a given situation warrants the collection of biometric data at all. Another layer of scrutiny to apply to these systems is purpose limitation, or “function creep” — essentially making sure data use doesn’t extend beyond the original intent.

One example the report gives is the use of facial recognition in Swedish schools. They were using it to track student attendance. Eventually the Swedish Data Protection Authority banned it on the grounds that facial recognition was too onerous for the task — it was disproportionate. And surely there were concerns about function creep; such a system captures rich data on a lot of children and teachers. What else might that data be used for, and by whom?

This is where rhetoric around safety and security becomes powerful. In the Swedish school example, it’s easy to see how that use of facial recognition doesn’t hold up to proportionality. But when the rhetoric is about safety and security, it’s harder to push back. If the purpose of the system is not taking attendance, but rather scanning for weapons or looking for people who aren’t supposed to be on campus, that’s a very different conversation.

The same holds true of the need to get people back to work safely and to keep returning students and faculty on college campuses safe from the spread of COVID-19. People are amenable to more invasive and extensive biometric surveillance if it means maintaining their livelihood with less danger of becoming a pandemic statistic.

It’s tempting to default to a simplistic position of more security equals more safety, but under scrutiny and in real-life situations, that logic falls apart. First of all: More safety for whom? If refugees at a border have to submit a full spate of biometric data, or civil rights advocates are subjected to facial recognition while exercising their right to protest, is that keeping anyone safe? And even if there is some need for safety in those situations, the downsides can be dangerous and damaging, creating a chilling effect. People fleeing for their lives may balk at those conditions of asylum. Protestors may be afraid to exercise their right to protest, which hurts democracy itself. Or schoolkids could suffer under the constant psychological burden of being reminded that their school is a place full of potential danger, which hampers mental well-being and the ability to learn.

A related problem is that regulation may happen only after these systems have been deployed, as the report illustrates using the case of India’s controversial Aadhaar biometric identity project. The report described it as “a centralized database that would store biometric information (fingerprints, iris scans, and photographs) for every individual resident in India, indexed alongside their demographic information and a unique twelve-digit ‘Aadhaar’ number.” The program ran for years without proper legal guardrails. In the end, instead of using new regulations to roll back the system’s flaws or dangers, lawmakers essentially fashioned the law to fit what had already been done, thereby encoding the old problems into law.

And then there’s the issue of efficacy, or how well a given measure works and whether it’s helpful at all. You could fill entire tomes with research on AI bias and examples of how, when, and where those biases cause technological failures and result in abuse of the people upon whom the tools are used. Even when models are benchmarked, the report notes, those scores may not reflect how well those models perform in real-world applications. Fixing bias problems in AI, at multiple levels of data processing, product design, and deployment, is one of the most important and urgent challenges the field faces today.

One of the measures that can abate the errors that AI coughs up is keeping a human in the loop. In the case of biometric scanning like facial recognition, systems are meant to essentially provide leads after officers run images against a database, which humans can then chase down. But these systems often suffer from automation bias, which is when people rely too much on the machine and overestimate its credibility. That defeats the purpose of having a human in the loop in the first place and can lead to horrors like false arrests, or worse.

There’s a moral aspect to considering efficacy, too. For example, there are many AI companies that purport to be able to determine a person’s emotions or mental state by using computer vision to examine their gait or their face. Though it’s debatable, some people believe that the very question these tools claim to answer is immoral or simply impossible to do accurately. Taken to the extreme, this results in absurd research that’s essentially AI phrenology.

And finally, none of the above matters without accountability and transparency. When private companies can collect data without anyone knowing or consenting, when contracts are signed in secret, when proprietary concerns take precedence over demands for auditing, when laws and regulations between states and countries are inconsistent, and when impact assessments are optional, these crucial issues and questions go unanswered. And that’s not acceptable.

The pandemic has served to show the cracks in our various governmental and social systems and has also accelerated both the simmering problems therein and the urgency of solving them. As we go back to work and school, the biometrics issue is front and center. We’re being asked to trust biometric surveillance systems, the people who made them, and the people who are profiting from them, all without sufficient answers or regulations in place. It’s a dangerous tradeoff. But you can at least understand the issues at hand, thanks to the AI Now Institute’s latest report.

Big Data – VentureBeat

The State Of Industry 4.0 In 2020

May 8, 2020   BI News and Info

Part 1 in a series on the findings of an Industry 4.0 adoption study by The MPI Group

It was in 2011 at Hannover Messe that the German government first announced a new initiative to digitize manufacturing – an initiative known as Industry 4.0. Now, less than a decade later, the uptake of Industry 4.0 – not only in Germany and Europe, but around the world – has been impressive.

At the same time, adoption is not necessarily uniform across all companies and industries. What’s clear is that Industry 4.0 capabilities now act as a competitive differentiator that helps set companies apart as the gap widens between the digital haves and have-nots. Those moving forward with Industry 4.0 initiatives, in other words, are reaping the benefits of sophisticated new capabilities while the laggards are realizing the need to catch up quickly.

The MPI Group study

This is one of the findings of a recent study by The MPI Group that surveyed 679 manufacturers on Industry 4.0 adoption. MPI defines Industry 4.0 straightforwardly as the practice of organizations “embedding intelligence and/or smart devices into their operations and connecting them to the enterprise and supply chain, as well as offering new products that incorporate embedded intelligence.”

Organizations adopting such practices stand to benefit. The title of the MPI study says it all: Digitization Delivers Operations, Enterprise, and Product Improvements: Manufacturers Leverage Industry 4.0 for Increased Productivity, Revenues, and Profitability.

The importance of Industry 4.0

Overwhelmingly, respondents saw Industry 4.0 as an important initiative – with 90% saying that over the next five years it will have “significant” or “some” impact, and only 9% downplaying the impact.

Perceived value from Industry 4.0 spans three categories: financial, operational, and brand value. Respondents see the most financial value in supply chain activities (50%) while – not surprisingly – operational value is associated with operations (63%), and brand value with sales and marketing (46%).

Processes vs. products

Interestingly, there is somewhat of a split regarding how Industry 4.0 capabilities are applied to processes vs. products (subsequent blogs in this series will focus on each of these categories in more detail).

As a summary of the study states: “Nearly half of manufacturers (47%) have strategies implemented to apply Industry 4.0 to processes, but only 37% have done so for their products.” Organizations seeking to differentiate with Industry 4.0 capabilities, therefore, may find significant opportunities on the product side of the equation.

Transformation leadership roles

Who tends to take responsibility for Industry 4.0 initiatives at the ground level of an organization pursuing transformation? According to the study, respondents say the roles most likely to lead initiatives are the heads of manufacturing (51%), engineering (27%), and the CIO (20%).

When it comes to what these transformation initiatives tend to focus on, there is a fair amount of uniformity. Respondents cite factory automation (62%), AI (62%), and IoT (55%) as top concerns. Typically, the study says, initiatives “start with integration of intelligent sensors and controls into equipment and/or converting source data from analog to digital signals.” The goal, ultimately, is to break down silos across all departments and functions.

At SAP, we have long highlighted this sort of integration as playing a critical role in helping to connect business units across the design-to-operate lifecycle. To get there, however, the study finds that 54% of networks require upgrades to support machine-to-machine communications and 67% require upgrades for machine-to-enterprise communications.

Thus, while many companies see Industry 4.0 trends as having tremendous impacts on the way they operate and compete in today’s increasingly connected economy – and while many are indeed moving forward with initiatives to keep pace – work remains to be done. Organizations with vision and clear leadership will be those that succeed first.

Please stay tuned for subsequent blogs that review aspects of the MPI study as they relate to Industry 4.0 for plants and products. In the meantime, have a look at the full study here.

Digitalist Magazine

Storage 101: Understanding the NAND Flash Solid State Drive

March 16, 2020   BI News and Info

The series so far:

Solid-state drives (SSDs) have made significant inroads into enterprise data centers in recent years, supporting workloads once the exclusive domain of hard-disk drives (HDDs). SSDs are faster, smaller, use less energy, and include no moving parts. They’ve also been dropping in price while supporting greater densities, making them suitable for a wide range of applications.

Despite their increasing presence in the enterprise, there’s still a fair amount of confusion about how SSDs work and the features that distinguish one drive from another. Concepts such as NAND chips, multi-level cells, and floating gate technologies can be somewhat daunting if you’re still thinking in terms of rotating disks and moving actuator arms, components that have no place in the SSD.

The better you understand how SSDs operate, the more effectively you can select, deploy, and manage them in your organization. To help with that process, this article introduces you to several important SSD concepts so you have a clearer picture of the components that go into an SSD and how they work together to provide reliable nonvolatile storage.

Bear in mind, however, that an SSD is a complex piece of technology and can easily justify much more in-depth coverage than a single article can offer. You should think of this as an introduction, not a complete treatise: a starting point for building a foundation in understanding the inner workings of your SSDs.

Introducing the NAND Flash SSD

Like an HDD, an SSD is a nonvolatile storage device that stores data whether or not it is connected to power. An HDD, however, uses magnetic media to store its data, whereas the SSD uses integrated electronic circuitry to retain specific charge states, which in turn map to the data bit patterns.

SSDs are based on flash memory technologies that enable data to be written, read, and erased multiple times. Flash memory comes in two varieties: NOR and NAND. Although each offers advantages and disadvantages (a discussion beyond the scope of this article), NAND has emerged as the favored technology because it delivers faster erase and write times. Most contemporary SSDs are based on NAND flash, which is why it’s the focus of this article.

An enterprise SSD contains multiple NAND flash chips for storing data. Each chip contains one or more dies, and each die contains one or more planes. A plane is divided into blocks, and a block is divided into pages.

Of these, the blocks and pages are the greatest concern, not because you configure or manipulate them directly, but because of how data is written, read, and erased on a NAND chip. Data is read and written at the page level, but erased at the block level, as illustrated in Figure 1.

Figure 1. Writing and erasing data in a NAND flash SSD (image by Dmitry Nosachev, licensed under Creative Commons Attribution-Share Alike 4.0 International)

In this case, each page is 4 kibibytes (KiB) and each block is 256 KiB, which equals 64 pages per block. (A kibibyte is 1024 bytes. Kibibytes are sometimes used instead of kilobytes because they’re more precise. A kilobyte can equal 1000 bytes or 1024 bytes, depending on its usage.) Each time the SSD reads or writes data, it does so in 4-KiB chunks, but each time the drive erases data, it carries out a 256-KiB operation. This write/erase difference has serious consequences when updating data, as you’ll see later in the article.
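
To make that arithmetic concrete, here is a minimal Python sketch of the example layout above (4-KiB pages, 256-KiB blocks). The sizes are the article’s illustrative values rather than a property of any particular drive, and the function name is hypothetical:

PAGE_SIZE_KIB = 4      # data is read and written one page at a time
BLOCK_SIZE_KIB = 256   # data is erased one block at a time
PAGES_PER_BLOCK = BLOCK_SIZE_KIB // PAGE_SIZE_KIB   # 256 / 4 = 64 pages per block

def pages_for_io(size_kib):
    """Number of whole pages touched by a read or write of the given size."""
    return -(-size_kib // PAGE_SIZE_KIB)   # ceiling division

print(PAGES_PER_BLOCK)    # 64
print(pages_for_io(10))   # a 10-KiB write still occupies 3 full 4-KiB pages
print(pages_for_io(1))    # even a 1-KiB update touches a whole page

The asymmetry is the point: a read or write costs whole pages, but an erase always costs a whole 256-KiB block, even when only a few of its 64 pages actually need to change.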

Inside the NAND Cell

A page is made up of multiple cells that each hold one or more data bits. A data bit is represented by an electrical charge state, which is determined by the electrons trapped between insulator layers within the cell. Each bit is registered as either charged (0) or not charged (1), providing the binary formula needed to represent the data.

Today’s NAND flash chips use either floating gate cells or charge trap cells. Until recently most NAND flash relied on floating gate technologies, in which the electrons are trapped between two oxide layers in a region called the floating gate. The bottom oxide layer is thin enough for electrons to pass through when voltage is applied to the underlying substrate. Electrons move into the floating gate during a write operation and out of the floating gate during an erase operation.

The challenge with the floating gate approach is that each time voltage is applied and electrons pass through the oxide layer, the layer degrades slightly. The more write and erase operations, the greater the degradation, until eventually the cell might no longer be viable.

Bear in mind, however, that SSD technologies have come a long way, making them more reliable and durable while delivering greater performance and storing more data. At the same time, they keep coming down in price, making them far more cost-competitive.

Vendors continue to explore new technologies to improve SSDs. For example, several vendors are now turning to charge trap technologies in their NAND cells. Charge trap cells are similar to floating gate cells except that they use different insulator materials and methodologies to trap the electrons, resulting in cells that are less susceptible to wear. That said, charge trap technologies come with their own reliability issues, so neither approach is ideal.

There is, of course, much more to floating gate and charge trap technologies, but this should give you some idea of what’s going on, in the event you come across these terms. But know too that gate technologies are only part of the equation when it comes to understanding the NAND cell structure.

In fact, the bigger concern when evaluating SSDs is the number of bits stored in each cell. Today’s SSDs accept between one and four bits per cell, with a correlated number of charge states per cell, as shown in the following table. Note that vendors are also working on five-bit cell flash—dubbed penta-level cell (PLC)—but the jury is still out on this technology.

Cell type               | # of data bits | # of charge states | Possible binary values per cell
Single-level cell (SLC) | 1              | 2                  | 0, 1
Multi-level cell (MLC)  | 2              | 4                  | 00, 01, 10, 11
Triple-level cell (TLC) | 3              | 8                  | 000, 001, 010, 011, 100, 101, 110, 111
Quad-level cell (QLC)   | 4              | 16                 | 0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111, 1000, 1001, 1010, 1011, 1100, 1101, 1110, 1111

As the table shows, the more bits per cell, the greater the number of available charge states per cell, and the more charge states, the greater the number of available binary values, which translates to greater density. Not only does this mean packing in more data on each chip, but it also means more affordable drives.
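
The relationship in the table is simply binary encoding: a cell that stores n bits must distinguish 2^n charge states. A quick illustrative Python sketch that reproduces the table’s rows (it models nothing about real voltage thresholds):

cell_types = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

for name, bits in cell_types.items():
    states = 2 ** bits   # number of distinguishable charge states
    values = [format(v, "0{}b".format(bits)) for v in range(states)]
    print("{}: {} bit(s), {} states -> {}".format(name, bits, states, ", ".join(values)))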

Unfortunately, when you start squeezing more bits into each cell, performance suffers and the cells wear out faster. A QLC drive might hold more data and be a lot cheaper, but an SLC drive will deliver the best performance and last the longest, although at a greater price.

In addition to squeezing in more bits per cell, vendors have also been shrinking cells to fit more of them on each chip. Although this increases data density, it can lead to electrical charges leaking from one cell to another, so additional techniques must be leveraged to avoid data corruption and preserve data integrity. But vendors have an answer for this as well: 3-D NAND.

In the 3-D approach, vendors back off from shrinking cells and instead stack cells on top of each other in layers, creating chips that can hold significantly more data. When combined with multi-bit technologies such as MLC or TLC, 3D NAND makes it possible to increase chip densities beyond anything before possible, without sacrificing data integrity. For example, Samsung’s sixth-generation V-NAND chip combines 3-D and TLC technologies to store up to 256 Gb of data across 136 layers of cells. (For more information about performance-related concepts, refer to the second article in this series.)

The SSD Components

NAND chips are at the heart of the SSD, carrying out the drive’s main function of storing data. But an SSD also includes several other important components which work together to facilitate the read, write, and erase operations.

Figure 2 shows an HGST Ultrastar SSD that holds 1.6 TB of data. Although the NAND chips are covered by a label, you can see that the circuit board is filled with a variety of other components.

Figure 2. HGST Ultrastar SN150 Series NVMe/PCIe solid-state drive (photo by Dmitry Nosachev, licensed under Creative Commons Attribution-Share Alike 4.0 International)

To the right of the connector pins, the device hosts five Micron DRAM chips, neatly arrayed from bottom to top. The chips serve as a cache for improving write operations and maintaining system data. Unlike the NAND chips, the cache is volatile (non-persistent) and used only as a temporary buffer. In other words, although the buffered data won’t survive a loss of power, the drive will deliver better performance when it’s running.

The HGST drive has a 2.5-inch form factor and provides a Peripheral Component Interconnect Express (PCIe) interface. It also supports the Non-Volatile Memory Express (NVMe) protocol for maximizing the benefits of the PCIe interface. (For more information about form factors, interfaces, and protocols, refer to the first article in this series.)

You can see the PCIe interface connector in Figure 2, jutting out from the side of the circuit board. You can also see it in Figure 3, which shows the front side of the HGST drive, covered mostly by the heat sink.

Figure 3. HGST Ultrastar SN150 Series NVMe/PCIe solid-state drive (photo by Dmitry Nosachev, licensed under Creative Commons Attribution-Share Alike 4.0 International)

SSD configurations can vary considerably from one to the next, so don’t assume that others will look like the HGST drive. I picked this one because it provides a good example of a NAND flash SSD.

Despite the differences between SSDs, they all include NAND chips, conform to specific form factors and interface standards, and typically provide some type of cache to serve as a memory buffer. (All enterprise storage devices—HDD and SSD—provide built-in volatile cache.) An SSD also includes a controller for managing drive operations and firmware for providing the controller with the instruction sets necessary to carry out those operations.

Reading and Writing Data

As noted earlier, reading and writing data occur at the page level. Reading data is a fairly straightforward operation. When the drive receives a request for data, the controller locates the correct cells, determines the charge states, and ensures that the data is properly returned, using buffer memory as necessary. The entire process has little long-term impact on the drive itself.

Writing data is a programming operation that sets the data bits to the desired charge state, a process orchestrated by the controller. Writing data to a page for the first time is nearly as straightforward as reading data. The process grows more complex when modifying that data, which requires that it first be erased and then rewritten, a process commonly referred to as a program/erase cycle (P/E cycle).

During a typical P/E cycle, the entire block containing the targeted pages is written to memory. The block is then marked for deletion and the updated data rewritten to another block. The actual erase operation occurs asynchronously in order to optimize performance.

The controller coordinates the erase and write processes, using advanced data management algorithms. Even if only a single change on a single page needs to be recorded, an entire P/E cycle is launched. The block is marked for deletion and all its data rewritten.

The controller erases the block when it’s needed or as part of an optimization process. When erasing the block, the controller sets every bit in every cell to 1. After that, data can be written to any page in the block. However, if any bits in a page are set to 0—even if only one—the entire page is off-limits to writing data.

As an SSD starts filling up, the writing and rewriting operations become more complex and start to slow down. The controller must find places to store the data, which can involve erasing blocks marked for deletion, moving and consolidating data, or performing multiple P/E cycles. The fuller the drive, the more extensive these operations, which is why performance can start to degrade as a drive reaches capacity.

Because of the many P/E cycles, more data is routinely written to the drive than the amount being modified, a characteristic commonly called write amplification. For example, updating a simple 25-KB text file might result in 250 KB of data being written, causing additional wear on the cells.
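
That overhead is usually expressed as a write amplification factor (WAF): the data physically written to flash divided by the data the host asked to write. A minimal sketch using the 25-KB example above (the 250-KB figure is the article’s illustration, not a measurement):

def write_amplification_factor(host_kb_written, flash_kb_written):
    """WAF = data physically written to NAND / data logically written by the host."""
    return flash_kb_written / host_kb_written

# Updating a 25-KB file triggers roughly 250 KB of physical writes in the example.
print(write_amplification_factor(25, 250))   # 10.0, i.e. 10x amplification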

A flash SSD can support only a limited number of P/E cycles before it fails. The more bits squeezed into each cell, the fewer that number and the faster the time to failure. For example, an MLC drive might support up to 6,000 P/E cycles per block, but a TLC drive might max out at 3,000.

As P/E cycles start adding up, cells start failing. For this reason, SSDs employ several strategies to extend a drive’s lifespan, assure reliability, and maintain data integrity, including:

  • Wear leveling: A controller-based operation for distributing P/E cycles evenly across the NAND chips to prevent premature failure of any cell (a toy sketch of the idea follows this list).
  • TRIM command: An operating system command for consolidating a drive’s free space and erasing blocks marked for deletion, which can improve performance and minimize write amplification.
  • Over-provisioning: Extra drive space reserved for management processes such as wear leveling and for reducing the extra write amplification that occurs when a drive gets too full.
  • Caching: A process of storing data in memory to boost performance and, when used effectively, minimize P/E cycles.
  • Error-correction code (ECC): A process for checking data for errors and then, if necessary, correcting those errors.
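
As a rough illustration of the wear-leveling idea from the list above, the toy allocator below always erases the least-worn block. It is a deliberately simplified sketch: real controllers also track valid-page counts, hot and cold data, and free-block pools, none of which is modeled here.

NUM_BLOCKS = 8

def simulate(choose_block, cycles=1000):
    """Run a number of erase operations, counting how often each block is erased."""
    erase_counts = [0] * NUM_BLOCKS
    for _ in range(cycles):
        erase_counts[choose_block(erase_counts)] += 1
    return erase_counts

def wear_leveled(erase_counts):
    """Pick the block with the fewest erases so wear spreads evenly."""
    return min(range(NUM_BLOCKS), key=lambda b: erase_counts[b])

def naive(erase_counts):
    """Always reuse block 0 -- the behavior wear leveling is meant to prevent."""
    return 0

print("wear-leveled:", simulate(wear_leveled))   # roughly 125 erases per block
print("naive:       ", simulate(naive))          # all 1,000 erases hit block 0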

An SSD might also incorporate strategies for improving performance. For example, flash drives implement garbage collection, a background process for moving, consolidating, and erasing data. There’s some debate about whether garbage collection adds write amplification or reduces it. It depends on how the garbage collection operations are implemented and the quality of the algorithms used to carry out these operations.

SSD firmware updates might also address performance, reliability, and integrity issues, along with other types of issues. Whenever you install a new SSD, one of the first steps you should take is to ensure that you’re running the latest firmware. These are not necessarily the only tactics that a drive will employ, but they represent some of the most common.

Much More to the SSD

The information covered here should give you a sense of how NAND flash SSDs work. At the same time, you no doubt also realize that SSDs are extremely complex devices and that what I’ve touched upon barely scratches the surface. Even so, you should now have a more solid foundation for moving forward.

Keep in mind, however, that memory technologies are quickly evolving, with some redefining how we think of memory and storage. For example, an Intel Optane drive can store data like NAND but operate nearly as fast as DRAM, bridging the gap between traditional storage and traditional memory. Whether Optane or a similar technology will replace flash is yet to be seen, but no doubt something will, at which point we’ll be having a very different discussion.

SQL – Simple Talk

Washington Privacy Act fails again, but state legislature passes facial recognition regulation

March 13, 2020   Big Data

For the second year running, lawmakers in the state of Washington have failed to pass sweeping data privacy legislation. The Washington Privacy Act, or SB 6281 — akin to Europe’s GDPR or California’s CCPA — would have allowed individuals to request that companies delete their data. But today Washington state House and Senate lawmakers did succeed in passing SB 6280, which addresses public and private facial recognition use. The bill requires facial recognition training and bias testing and mandates that local and state government agencies disclose use of facial recognition. It also creates a task force to consider recommendations, including on potential discrimination against vulnerable communities.

The news comes a day before the state’s legislative session closes.

Senator Reuven Carlyle (D-WA) said in a statement today that a conference of lawmakers had failed to reconcile differences over whether the state attorney general or consumers in courts should have the power to enforce the law.

“Following two historic, near-unanimous votes on proposals in the Senate this year and last, I’m deeply disappointed that we weren’t able to reach consensus with our colleagues in the House,” Carlyle said. “The impasse remains a question of enforcement. As a tech entrepreneur who has worked in multiple startup companies, and in the absence of any compelling data suggesting otherwise, I continue to believe that strong attorney general enforcement to identify patterns of abuse among companies and industries is the most responsible policy and a more effective model than the House proposal to allow direct individual legal action against companies.”

Privacy regulation cosponsored by Carlyle last year also passed the Senate but failed in the House over disagreements.

The Washington Privacy Act (SB 6281) passed with a 46-1 Senate vote in February and then passed the Washington state House of Representatives in a 63-33 vote on March 6 with a range of amendments. The state Senate’s rejection of the amended law triggered a concurrence meeting with members from both parties.

A 2019 version of the Washington Privacy Act also made its way through the Senate in a 46-1 vote but died in the House of Representatives.

Representative Zack Hudgins (D-WA) was also part of the concurrence meeting and said lawmakers were unable to overcome disagreements around consumer-focused enforcement of privacy laws.

“Strong attorney general enforcement was never the issue; it was the role of consumers that proved impossible to reconcile. These issues of privacy, and the data economy, are not going to fade in Washington state or nationally,” Hudgins said in a statement.

Washington lawmakers are considering a range of bills related to biometric data privacy and facial recognition regulation.

Big Data – VentureBeat

From Washington state to Washington DC, lawmakers rush to regulate facial recognition

January 19, 2020   Big Data

Amid the start of an impeachment trial; talk of mounting hostility with Iran; new trade deals with China, Canada, and Mexico; and the final presidential debate before the start of the Democratic presidential primary season, you might’ve missed it, but it was also a momentous week for facial recognition regulation.

A bipartisan group in Congress wants action, roughly a dozen state governments are considering legislation, and overseas news broke Thursday that the European Commission is considering a five-year moratorium on facial recognition among potential next steps. This would make the EU the largest government worldwide to halt deployment of the technology.

In Washington, DC this week, the House Oversight and Reform Committee pledged to introduce legislation in the “very near future” that could regulate facial recognition use by law enforcement agencies in the US. Just like in hearings held last summer, members of Congress exhibited a fairly unified, bipartisan position that facial recognition use by the government should be regulated and in some cases limited. There was talk of regulation, but until this week, the future of sweeping facial recognition regulation seemed uncertain.

Congress on Civil Rights, the Constitution, and facial recognition

Lawmakers seem to have a sense of urgency to take action for a variety of reasons, including a lack of standards for businesses; governments; and local, state, and federal law enforcement.

One major area of focus: violation of the First Amendment right to freedom of assembly, and the idea that facial recognition might be used to identify people at political rallies or track political dissidents at protests.

“It doesn’t matter if it’s a President Trump rally or a Bernie Sanders rally, the idea of American citizens being tracked and cataloged for merely showing their faces in public is deeply troubling,” said Rep. Jim Jordan (R-OH).

“The urgent issue we must tackle is reining in the government’s unchecked use of this technology when it impairs our freedoms and our liberties. Our late chairman Elijah Cummings became concerned about government use of facial recognition technology after learning it was used to surveil protests in his district related to Freddie Gray. He saw this as a deeply inappropriate encroachment on the freedom of speech and association, and I couldn’t agree more,” Jordan said.

Another reason lawmakers are anxious to regulate facial recognition: Civil rights protections and the great potential for racial discrimination.

Analysis by the Department of Commerce’s National Institute of Standards and Technology (NIST) last month found that some facial recognition systems are anywhere from 10 to 100 times more likely to misidentify groups like the young, the elderly, women of color, and people of Asian or African descent.

Facial recognition systems that exhibit discriminatory performance, lawmakers contend, can exacerbate existing prejudices and overpolicing of schools and communities of color.

NIST’s analysis follows studies in 2018 and 2019 by AI researchers that found misidentification issues for popular facial recognition systems like Amazon’s Rekognition. Amazon has not agreed to have its AI analyzed by NIST, director Dr. Charles Romine told Congress this week.

Romine said talks between NIST and Amazon are ongoing on the subject of Rekognition review by the federal government.

If any one company seemed to be a main source of ire for the committee, it’s Amazon.

Amazon lobbied members of Congress on the subject and has stated a willingness to sell its facial recognition to any government agency. Amazon reportedly marketed Rekognition to ICE officials, but the extent to which facial recognition is sold to government agencies is still unknown. In a shareholder vote last summer, Amazon chose to continue to sell facial recognition services to governments.

One factor that lawmakers say motivates a sense of urgency: China. In the EU, Washington DC, and on the state and local level across the U.S., lawmakers frequently cite China’s use of facial recognition to strengthen an authoritarian state as a future they want to avoid.

Meredith Whittaker is a cofounder of the AI Now Institute and a former Google employee. In testimony earlier this week, she talked about how facial recognition is often used by those in power to monitor those without power and described the difference between usage in the U.S. and China. Last month, the AI Now Institute called for a ban on business and government use of facial recognition technology.

“I think it is a model for authoritarian social control that is backstopped by extraordinarily powerful technology,” Whittaker said about China’s use of facial recognition software. “I think one of the differences between China and the U.S. is that there, the technology is announced as state policy. In the U.S., this is primarily corporate technology that is being secretly threaded through our core infrastructures without that kind of acknowledgment.”

Washington state’s impact on facial recognition regulation

A handful of cities put facial recognition bans and moratoriums in place in 2019, but state legislatures in 2020 are already moving even faster to regulate the technology. Since the start of the year, 10 state legislatures have introduced bills to regulate the use of facial recognition software, according to the Georgetown University Law School Center on Privacy and Technology.

In the state of Washington, the stakes may be unlike anywhere else in the world. Legislation to regulate use of facial recognition in Washington can be particularly influential, since the Seattle area is home to Amazon and Microsoft, two of the largest companies selling facial recognition software to governments. Axon, the maker of police body cameras and a provider of video cloud storage, is also based in Washington.

In Washington, state lawmakers this week started a second attempt to pass the Washington State Privacy Act. Known as SB 6281, the bill would regulate data privacy and require “meaningful human review” of facial recognition results when used by the private sector. In a press conference Monday, the bill’s chief sponsor, Sen. Reuven Carlyle, said Washington is moving forward because there isn’t time to wait for lawmakers in Washington DC to deliver privacy regulation to rein in business use of private data. Carlyle said the bill takes cues from CCPA in California and GDPR in Europe.

A different version of the Washington Privacy Act passed the Washington State Senate with a near unanimous vote in spring 2019 but died in the Washington State House of Representatives.

Lawmakers complained last spring that the legislative process was tainted by lobbying from tech companies like Microsoft, and that companies like Amazon and Microsoft played too much of a role in drafting the 2019 version of the Washington Privacy Act.

Also introduced this week in Washington is SB 6280, a bill to regulate government use of facial recognition. The bill’s chief sponsor, State Senator Joe Nguyen, is a senior program manager at Microsoft, according to his LinkedIn profile. Nguyen is also a cosponsor of the Washington Privacy Act.

Microsoft initially supported the Washington Privacy Act last year but came to oppose amendments to the bill, calling them too restrictive. Microsoft also opposed a moratorium proposed by the ACLU, one of the first such moratoriums to be considered by any state legislature.

Jevan Hutson leads facial recognition and AI policy at the University of Washington School of Tech and Public Policy Clinic. He testified in multiple hearings in Olympia, Washington this week in favor of HB 2363, a bill that would make biometric data the sole property of an individual, and in opposition to the latest iteration of the Washington Privacy Act.

He also introduced a bill known as the AI Profiling Act. The legislation, which he drafted with others at the University of Washington, would outlaw the use of AI to profile people in public places; in important decision-making processes for a number of industries; and to predict a person’s religious affiliation, political affiliation, immigration status, or employability.

His position is that facial recognition may have some legitimate use cases, but it’s also a perfect surveillance tool, and that the root cause motivating facial recognition supporters is to create a new, invasive surveillance capitalism-driven marketplace.

Like last year, he believes the permissive regulatory framework found in the Washington Privacy Act that rejects the idea of a moratorium comes about due to the outsized influence of technology companies in Washington State that stand to profit from the widespread deployment of facial recognition.

He views Microsoft’s involvement and lobbying in 2019, and again in 2020, as an effort to create an initial framework of what facial recognition regulation should look like so they can bring that model to other states and Washington DC.

While speaking at Seattle University last year, Microsoft president Brad Smith said legislation passed in Washington could go on to shape facial recognition policy around the world.

As lawmakers in favor of the bill lay the necessary groundwork to attempt to pass the bill for a second time, politicians and advocates like Hutson argue legislation should take into account the demonstrated harm facial recognition can do and reject the idea that widespread use of facial recognition is inevitable. 

“I think legislators and advocates here are seriously concerned and recognize that we need to get out front,” he said. “I think that sort of gets to the question of why now; it’s so important that we act because things will be bought by governments, and businesses will begin to deploy these things if there is not a clear sign from regulators and legislators both at the federal and local level to say, ‘No, this is not a valid market given the dangers that it poses.’” 

Hutson also calls action in the near future important in order to shut down the idea that stifling innovation, a common argument against regulation, is always a bad thing. Facial recognition is being used for payments and to arrest people accused of crimes in China, but it’s also being used to track or imprison ethnic minorities, a use case he says could also be considered innovative.

“Innovation in many ways is this sort of false religion right where it’s like innovation in and of itself is a perfect good, and it’s not,” he said. “This is innovation worth stifling. Like I don’t think we should be super innovative with nuclear weapons. We don’t need even more innovative forms of oppression to be legitimized and authorized by the state legislature.”

Finishing thoughts

As laws get hammered out, stories of outrage continue.

In recent days in Denver, where the city council is considering a facial recognition ban, advocates demonstrated that a facial recognition system running at a claimed 92% accuracy rate matched all nine members of the council to people on the local sex offender registry.

There’s also the story of Clearview AI, a startup that allows people to upload an image of a person and then find where else on the web that person’s image appears.

“It’s creepy what they’re doing, but there will be many more of these companies. Absent a very strong federal privacy bill, we’re all screwed,” Stanford University professor Al Gidari told the New York Times.

Steps taken this week to introduce legislation are just the beginning.

Bills and regulations continue to percolate through state legislatures and the halls of Congress, and as they do, the string of stories that refresh outrage and brought about the initial sense of urgency seems likely to continue.

Big Data – VentureBeat

Dan Bongino: More Troubling Deep State Connections Emerge

January 17, 2020   Humor

The Dan Bongino Show
The Bongino Report

ANTZ-IN-PANTZ ……

The State of Trumpworld

January 12, 2020   Humor

From Twitter: A protestor confronts Senator Lindsey Graham and asks him “How will your children survive extinction with Donald Trump in office?”

Graham replies, “That’s easy, I don’t have any.”


Frank Luntz, the Republican pollster, is chatting with Donald Trump at the White House Christmas party. Luntz asks Trump what his middle initial “J” stands for.

Trump responds, “Genius.”

Political Irony

Carnegie Mellon and Oregon State team wins first leg of DARPA Subterranean Challenge robot competition

August 22, 2019   Big Data

The U.S. Defense Advanced Research Projects Agency (DARPA) kicked off the Subterranean Challenge in December 2017, with the goal of equipping future warfighters and first responders with tools to rapidly map, navigate, and search hazardous underground environments. The final winner of the four-event competition won’t be selected until 2021, but Team Explorer from Carnegie Mellon University and Oregon State University managed to best rivals for the initial prize.

On four occasions during the eight-day Tunnel Circuit event, which concluded today, each team deployed multiple robots into National Institute for Occupational Safety and Health research mines in South Park Township, Pennsylvania, tasked with autonomously navigating mud and water and communicating with each other and a base station for an hour at a time as they searched for objects. Team Explorer’s roughly 30 university faculty, students, and staff members leveraged two ground robots and two drones to find 25 artifacts in its two best runs (14 more than any other team), managing to identify and locate a backpack within 20 centimeters of its actual position.

“Mobility was a big advantage for us,” said team co-leader Sebastian Scherer, associate research professor in Carnegie Mellon’s Robotics Institute, in a statement. “The testing [prior to the event, at Tour-Ed Mine in Tarentum, Pennsylvania] was brutal at the end, but it paid off in the end. We were prepared for this … We had big wheels and lots of power, and autonomy that just wouldn’t quit.”

Team Explorer — which has the backing of the Richard King Mellon Foundation, Schlumberger, Microsoft, Boeing, Flir Systems, Near Earth Autonomy, Epson, Lord, and Doodle Labs — is one of 11 teams competing for a portion of the Subterranean Challenge’s combined $4.5 million prize pool. Future events will involve an Urban Circuit, where robots will explore complex underground facilities, and a Cave Circuit, where the robots will operate in natural caves.

“All the teams worked very hard to get here, and each took a slightly different approach to the problem,” said Team Explorer co-leader Matt Travers, a system scientist at CMU’s Robotics Institute. “This was a great experience for all of us and we are proud of the performance by our team members and our robots.”

The Subterranean Challenge also includes the Virtual track, in which DARPA-funded and self-funded teams are developing software using models of systems, environments, and terrain to compete in simulation-based events and explore larger-scale runs. The winner will earn up to $1.5 million in the final event, with additional prizes of up to $500,000 for self-funded teams in each of four Virtual Circuit events.

Big Data – VentureBeat
