
Tag Archives: Global

Building AI for the Global South

March 7, 2021   Big Data


Harm wrought by AI tends to fall most heavily on marginalized communities. In the United States, algorithmic harm may lead to the false arrest of Black men, disproportionately reject female job candidates, or target people who identify as queer. In India, those impacts can further impact marginalized populations like Muslim minority groups or people oppressed by the caste system. And algorithmic fairness frameworks developed in the West may not transfer directly to people in India or other countries in the Global South, where algorithmic fairness requires understanding of local social structures and power dynamics and a legacy of colonialism.

That’s the argument behind “De-centering Algorithmic Power: Towards Algorithmic Fairness in India,” a paper accepted for publication at the Fairness, Accountability, and Transparency (FAccT) conference, which begins this week. Other works that seek to move beyond a Western-centric focus include Shinto or Buddhism-based frameworks for AI design and an approach to AI governance based on the African philosophy of Ubuntu.

“As AI becomes global, algorithmic fairness naturally follows. Context matters. We must take care to not copy-paste the Western normative fairness everywhere,” the paper reads. “The considerations we identified are certainly not limited to India; likewise, we call for inclusively evolving global approaches to Fair-ML.”

The paper’s coauthors concluded that conventional measurements of algorithm fairness make assumptions based on Western institutions and infrastructures after they conducted 36 interviews with researchers, activists, and lawyers working with marginalized Indian communities. Among the five coauthors, three are Indian and two are white, according to the paper.

Google research scientist Nithya Sambasivan, who previously worked to create a phone broadcasting system for sex workers in India, is the lead author. She’s also head of human-computer interaction at Google Research India. Coauthors include Ethical AI team researchers Ben Hutchinson and Vinodkumar Prabhakaran. Hutchinson and Prabhakaran were listed as coauthors of a paper titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” that was also accepted for publication at FAccT this year, but the version submitted to FAccT does not include their names. That paper was the subject of debate at the time former Google AI ethics co-lead Timnit Gebru was fired and concludes that extremely large language models harm marginalized communities by perpetuating stereotypes and biases. Organizers of the conference told VentureBeat this week that FAccT has suspended its sponsorship relationship with Google.

The paper about India identifies factors commonly associated with algorithmic harm in the country, including models being overfit to digitally rich profiles, which usually means middle class men, and a lack of ways to interrogate AI.

As a major step toward progress, the coauthors point to the AI Observatory, a project to document harm from automation in India that launched last year with support from the Mozilla Foundation. The paper also calls for reporters to go beyond business reporting and ask tech companies tough questions, stating, “Technology journalism is a keystone of equitable automation and needs to be fostered for AI.”

“While algorithmic fairness keeps AI within ethical and legal boundaries in the West, there is a real danger that naïve generalization of fairness will fail to keep AI deployments in check in the non-West,” the paper reads. “We must take pains not to develop a general theory of algorithmic fairness based on the study of Western populations.”

The paper is part of a recent surge in efforts to build AI that works for the Global South.

A 2019 paper about designing AI for the Global South describes the term “Global South” as similar to the term “third world,” with a shared history of colonialism and development goals. Global South does not simply mean the Southern Hemisphere: Northern Hemisphere nations like China, India, and Mexico are generally included, while Australia lies in the Southern Hemisphere but is considered part of the Global North. China seems to be set aside, since its AI ambitions and results instill fear in politicians in Washington, D.C. and executives in Big Tech alike.

“The broad concern is clear enough: If privileged white men are designing the technology and the business models for AI, how will they design for the South?” the 2019 paper reads. “The answer is that they will design in a manner that is at best an uneasy fit, and at worst amplifies existing systemic harm and oppression to horrifying proportions.”

Another paper accepted for publication at FAccT this week and covered by VentureBeat examines common hindrances to data sharing in Africa. Written primarily by AI researchers who grew up or live in Africa, the paper urges relationships to data that build trust and consider historical context, as well as current trends of Big Tech companies growing operations in Africa. Like the Google paper, that work draws conclusions from interviews with local experts.

“In recent years, the African continent as a whole has been considered a frontier opportunity for building data collection infrastructures. The enthusiasm around data sharing, and especially in machine learning or data science for development/social good settings, has ranged from tempered discussions around new research avenues to proclamations that ‘the AI invasion is coming to Africa (and it’s a good thing).’ In this work, we echo previous discussions that this can lead to data colonialism and significant, irreparable harm to communities.”

The African data industry is expected to see steady growth in the coming years. Companies like Amazon’s AWS and Microsoft’s Azure opened their first datacenters in Africa in 2019 and 2020, respectively. Such trends have led to examination of data practices around the world, including in the Global South.

Last year, MIT hosted a three-day summit in Boston to discuss AI from a Latin American perspective. The winner of a pitch competition at that event was a predictive model for attrition rates in higher education in Mexico.


Above: The 2020 Global AI Readiness Index comparing preparedness and capacity across 33 different metrics

Image Credit: Oxford Insights

As part of the summit, Latinx in AI founder Laura Montoya gave a presentation about the Global AI Readiness (GAIR) score of Caribbean and Latin American countries, alongside factors like unemployment rates, education levels, and the cost of hiring AI researchers.

The inaugural Government AI Readiness Index ranked Mexico highest among Latin American nations, followed by Uruguay and Colombia. Readiness rankings were based on around a dozen factors, including skills, education levels, and governance. Cuba ranked last in the region. When coauthors introduced GAIR in 2019, they questioned whether the Global South would be left out of the fourth industrial revolution. That concern was echoed in the 2020 report.

“If inequality in government AI readiness translates into inequality in AI implementation, this could entrench economic inequality and leave billions of citizens across the Global South with worse quality public services,” authors of the report said.

In the 2020 GAIR, Uruguay inched ahead of Mexico. At #42 in the world, Uruguay is the highest-ranking country in Latin America. Top 50 nations in the AI readiness index are almost entirely in the Global North. And authors of the report stress that having the capabilities to advance isn’t the same thing as successful implementation.

Montoya insists that Caribbean and Latin American nations must consider factors like unemployment rates and warns that brain drain can also be a significant factor and lead to a lack of mentors for future generations.

“Overall, Latin American and Caribbean do have fairly high education levels, and specifically they actually develop more academic researchers in the area of AI than other regions globally, which is of note, but oftentimes those researchers with high technological skills will leave their country of origin in order to seek out potential job opportunities or resources that are not available in their country of origin,” she said.

Leda Basombrio is the data science lead at a center of excellence established by Banco de Credito del Peru. Speaking as part of a panel on the importance of working with industry, she described the difficulty of trying to recruit Latinx AI talent away from Big Tech companies like Facebook or Google in the United States. The majority of AI Ph.D. graduates in the U.S. today are born outside the United States, and about four out of five stay in the U.S. after graduation.

And solutions built elsewhere don’t simply transfer without consideration of local context and culture, she said. Americans or Europeans are likely unfamiliar with the financial realities in Peru, like microfinance loans or informal economic activity.

“The only people that are capable of solving and addressing [problems] using AI as a tool or not are ourselves. So we have to start giving ourselves more credit and start working on those fields because if we expect resolutions will come from abroad, nothing will happen, and I see that we do have the talent, experience, everything we can get,” she said.

AI policy: Global North vs. the Global South

Diplomats and national government leaders have met on several occasions to discuss AI investment and deployment strategies in recent years, but those efforts have almost exclusively involved Global North nations.

In 2019, OECD member nations and others agreed to a set of principles in favor of the “responsible stewardship of trustworthy AI.” More than 40 nations signed the agreement, but only five were from the Global South.

Later that year, the G20 adopted AI principles based on the OECD principles calling for human-centered AI and the need for international cooperation and national policy to ensure trustworthy AI. But that organization only includes six Global South nations: Brazil, India, Indonesia, Mexico, South Africa, and Turkey.

The Global Partnership on AI (GPAI) was formed last year in part to counter authoritarian governments’ efforts to implement surveillance tech and China’s AI ambitions. The body of 15 nations includes the U.S., but Brazil, India, and Mexico are its only members from the Global South.

Last year, the United States Department of Defense brought together a group of allies to consider artificial intelligence applications in the military, but that was primarily limited to U.S. allies from Europe and East Asia. No nations from Africa or South America participated.

Part of the lack of Global South participation in such efforts may have to do with the fact that several countries still lack national AI strategies. In 2017, Canada became the first country in the world to form a national AI strategy, followed by nations in western Europe and the U.S. An analysis released this week found national AI strategies are under development in parts of South America, like Argentina and Brazil, and parts of Africa, including Ethiopia and Tunisia.


Above: A global map of national AI policy initiatives according to the 2021 AI Index at Stanford University

An analysis published in late 2020 found a growing gap or “compute divide” between businesses and universities with the compute and data resources for deep learning and those without. In an interview with VentureBeat earlier this year about an OECD project to help nations understand their compute needs, Nvidia VP of worldwide AI initiatives Keith Strier said he expects a similar gap to form between nations.

“There’s a clear haves and have-nots that’s evolving, and it’s a global compute divide. And this is going to make it very hard for tier two countries in Africa, in Latin America and Southeast Asia and Central Europe. [I] mean that the gap in their prosperity level is going to really accelerate their inability to support research, support AI startups, keep young people with innovative ideas in these fields in their country. They’re all going to flock to big capitals — brain drain,” Strier said.

The OECD AI Policy Observatory maintains a database of national AI policies and is helping nations put ethical principles into practice. OECD AI Policy Observatory administrator Karine Perset told VentureBeat in January that some form of AI strategy is underway in nearly 90 nations, including Kenya and others in the Global South.

There are other encouraging signs of progress in AI in Africa.

The machine learning tutorial project Fast.ai found high growth in cities like Lagos, Nigeria in 2019, the same year the African Union formed an AI working group to tackle common challenges and GitHub ranked a number of African and Global South nations among the leaders in percentage growth of contributions to open source repositories. In education, the African Master’s in Machine Intelligence was established in 2018 with support from Facebook, Google, the African Institute for Mathematical Sciences, and prominent Western AI researchers from industry and academia.

The Deep Learning Indaba conference has flourished in Africa, but AI research conferences are generally held in North America and Europe. The International Conference on Learning Representations (ICLR) was scheduled to take place in Ethiopia in 2020 and would have been the first major machine learning conference in Africa, but it was scrapped due to the COVID-19 pandemic.

The AI Index released earlier this week found that Brazil, India, and South Africa have some of the highest levels of hiring in AI around the world, according to LinkedIn data.

Analysis included in that report finds that attendance at major AI research conferences roughly doubled in 2020. COVID-19 forced major AI conferences to move online, which led to greater access worldwide. AI researchers from Africa have faced challenges when attempting to reach conferences like NeurIPS on numerous occasions in the past. Difficulty faced by researchers from parts of Africa, Asia, and Eastern Europe led the Partnership on AI to suggest that more governments create visas for AI researchers to attend conferences, akin to the visas some nations have for athletes, doctors, and entrepreneurs.

Building a lexicon for AI in the Global South

Data & Society has launched a project to map AI in the Global South. Ranjit Singh, a member of the AI on the Ground team at Data & Society, in late January launched a project for mapping AI in the Global South over the course of the year. As part of that project, he will collaborate with members of the AI community, including AI Now Institute, which is working to build a lexicon around conversations about AI for the Global South.

“The story of how public-private partnerships are imagined in the context of AI, especially in the Global South, and the nature of these relationships that are emerging, I find that to be quite a fascinating part of this study,” Singh said.

Singh said he focuses on conversations about AI in the Global South because identifying key words can help people understand critical issues and provide information needed for governance, policy, and regulation.

“So I want to basically move from what the conversation and keywords that scholarly research, as well as practitioners in the space, talk about and use to then start thinking about, ‘OK, if this is the vocabulary of how things work, or how people talk about these things, then how do we start thinking about governance of AI?’” he said.

A paper published at FAccT and coauthored by Singh and the Data & Society AI on the Ground team considers how environmental, financial, and human rights impact assessments are used to measure commonalities and quantify impact.

Global South AI use cases

Rida Qadri is a Ph.D. candidate who grew up in Pakistan and now studies urban information systems and the Global South at MIT. Papers about data and AI in India and Africa published at FAccT emphasize that the narrative around AI in the Global South often panders to specific ethics topics and communities influenced by legacies of colonialism. Qadri agrees with this assessment.

“They’re thinking about those kinds of ethical concerns that now Silicon Valley is being critiqued for. But what’s interesting is they position themselves as homegrown startups that are solving developing world problems. And because the founders are all from the developing world, they automatically get a lot of legitimacy. But the language that they’re speaking is just directly what Silicon Valley would be speaking — with some sort of ICT for development stuff thrown in, like empowering the poor, like educating farmers. You have ethics washing in the Global North, and in the developing world we have development washing or empowerment speak, like poverty porn,” she said.

Qadri also sees ways AI can improve lives and says that building innovative AI for the Global South could help solve problems that plague businesses and governments around the world, particularly when it comes to working in lean or resource-strapped environments.

Trends she’s watching around AI in the Global South include security and surveillance, census and population counts using satellite imagery, and predictions of poverty and socio-economics.

There are also numerous efforts related to creating language models or machine translation. Qadri follows Matnsāz, a predictive keyboard and developer tool for Urdu speakers. There’s also the Masakhane open source project to provide machine translation for thousands of African languages to preserve local languages and enable commerce and communication. That project focuses on working with low-resource languages, those with less text data for machine translation training than languages like English or French.

Final thoughts

Research published at FAccT this week frequently expresses concerns about data colonialism from the Global North. If AI can build what Ruha Benjamin refers to as a new Jim Code in the United States, it seems critically important to consider trends of democratization, or the lack thereof, and how AI is being built in nations with a history of colonialism.

It’s also true that brain drain is a major factor for businesses and governments in the Global South and that a number of international AI coalitions have been formed largely without these nations. Let’s hope those coalitions expand. Doing so could involve reconciling issues between countries largely known for colonization and those that were colonized. And enabling the responsible development and deployment of AI in Global South nations could help combat issues like data colonialism in other parts of the world.

But issues of trust remain a constant across international agreements and papers published at FAccT by researchers from Africa and India this week. Trust is also highlighted in agreements from the OECD and G20.

There can be a temptation to view AI ethics purely as a human rights issue, but the fair and equitable deployment of artificial intelligence is also essential to adoption and business risk management.

Above: Global AI adoption rates according to McKinsey

Jack Clark was formerly director of policy at OpenAI and is part of the steering committee for the AI Index, an annual report on the progress of AI in business, policy, and use case performance. He told VentureBeat earlier this week that the AI industry is industrializing rapidly, but it badly needs benchmarks and ways to test AI systems to move technical performance standards forward. As businesses and governments increase deployments, benchmark challenges can also help AI practitioners measure progress toward shared goals and rally people around common causes — like preventing deforestation.

The idea of common interests comes up in Ranjit Singh’s work at Data & Society. He said his project is motivated by a desire to map and understand the distinct language used to discuss AI in Global South nations, but also to recognize global concerns and encourage people to work together on solutions. These might include attempts to understand when a baby’s cough is deadly, as the startup Ubenwa is doing in Canada and Nigeria; seeking public health insights from search engine activity; and fueling local commerce with machine translation. But whatever the use case, experts in Africa and India stress that equitable and successful implementation depends on involving local communities from the inception.


Big Data – VentureBeat


SO MUCH FOR GLOBAL WARMING, EH?

January 19, 2021   Humor

Melting icebergs to cause an ice age.

Icebergs in the Antarctic gradually melting further and further from the frozen continent could be the trigger that plunges Earth into a new ice age, a study finds.

Researchers from Cardiff University reconstructed past climate conditions and identified tiny fragments of Antarctic rock dropped in the open ocean as part of a study designed to understand how ice ages begin.

Ice age cycles over the past 1.6 million years have been paced by periodic changes to Earth’s orbit around the Sun, which alter how much solar radiation reaches the surface.

However, before this study little was known about how changes in solar energy from small changes in the orbit could so dramatically change Earth’s climate.

They found that icebergs melting progressively further from Antarctica gradually move freshwater from the Southern Ocean into the Atlantic Ocean, changing ocean circulation, plunging the planet into a cold period, and triggering an ice age.

The impact of human-created CO2 emissions could make the Southern Ocean too warm for Antarctic icebergs to travel far enough to trigger this process, bringing an end to the 1.6-million-year cycle of ice ages that begin with melting icebergs, the study authors warned.

In their study, the team propose that when the orbit of Earth around the Sun is just right, Antarctic icebergs begin to melt further and further away from Antarctica.

This results in huge volumes of freshwater being shifted away from the Southern Ocean and into the Atlantic Ocean.

As the Southern Ocean gets saltier and the North Atlantic gets fresher, large-scale ocean circulation patterns begin to dramatically change, pulling CO2 out of the atmosphere and reducing the so-called greenhouse effect.

This in turn pushes the Earth into ice age conditions, according to the team, who reconstructed past climate conditions including finding tiny fragments of Antarctic rock dropped in open ocean by melting icebergs.

The rock fragments were obtained from sediments recovered by the International Ocean Discovery Program (IODP) that represents 1.6 million years of history.

The study found that these deposits, known as Ice-Rafted Debris, appeared to consistently lead to changes in deep ocean circulation, reconstructed from the chemistry of tiny deep-sea fossils called foraminifera.

The team also used new climate model simulations to test their hypothesis, finding that huge volumes of freshwater could be moved by the icebergs.

Aidan Starr, lead author of the study, said the team was astonished to find that the link between iceberg melting and ocean circulation was present during the onset of every ice age for the past 1.6 million years.

‘Such a leading role for the Southern Ocean and Antarctica in global climate has been speculated but seeing it so clearly in geological evidence was very exciting,’ he said.

Professor Ian Hall, co-author of the study and co-chief scientist of the IODP Expedition, from Cardiff, said the results provide a ‘missing link’ in ice age history.


ANTZ-IN-PANTZ ……


China proposes global data security standards

September 8, 2020   Big Data


(Reuters) — China announced an initiative on Tuesday to establish global standards on data security, saying it wanted to promote multilateralism in the area at a time when “individual countries” were “bullying” others and “hunting” companies.

The announcement, by State Councillor Wang Yi, comes a month after the United States said it was purging “untrusted” Chinese apps under a program dubbed “Clean Network”.

China’s initiative calls for technology firms to prevent the creation of so-called backdoors in their products and services that could allow data to be obtained illegally, as well as for participants to respect the sovereignty, jurisdiction and data management rights of other countries.

It also calls for participants to not engage in large-scale surveillance of other countries or illegally acquire information of foreign citizens through information technology.

It did not detail the nature of the initiative or say whether any other country had joined.

“Global data security rules that reflect the wishes of all countries and respect the interests of all parties should be reached on the basis of universal participation by all parties,” Wang said.

“Some individual countries are aggressively pursuing unilateralism, throwing dirty water on other countries under the pretext of ‘cleanliness’, and conducting global hunts on leading companies of other countries under the pretext of security. This is naked bullying and should be opposed and rejected.”

China tightly controls and censors its own cyberspace through the popularly dubbed Great Firewall, which has for years restricted access to firms such as U.S. majors Twitter, Facebook and Google owner Alphabet.

The administration of U.S. President Donald Trump has taken aim at Chinese giants such as Huawei Technologies, Tencent Holdings and TikTok owner ByteDance, citing concerns over national security and the collection of personal data, which the companies have rejected.

It has blocked U.S. exports to Huawei and plans to ban TikTok in the United States this month unless ByteDance sells TikTok’s U.S. operations.

(Reporting by Gabriel Crossley and Ryan Woo in Beijing, Brenda Goh in Shanghai; Editing by Muralikumar Anantharaman and Christopher Cushing)


Big Data – VentureBeat


PowerObjects, an HCL Technologies Company, Wins 2020 Global Microsoft Partner of the Year Award… Again!

August 3, 2020   Microsoft Dynamics CRM

We’ve got exciting news to share in today’s blogpost. On July 13, 2020, Microsoft announced their global Partner of the Year winners and finalists. We are pleased to report that we won one award and were a finalist for another! The Microsoft Partner of the Year Awards recognize Microsoft partners that have developed and delivered exceptional Microsoft-based solutions during the past year.

Source


PowerObjects- Bringing Focus to Dynamics CRM


My session at Power BI Global Virtual Conference – #PowerQuery

July 10, 2020   Self-Service BI

Last month I had the pleasure of presenting a session at the free Power BI Global Virtual Conference.

The conference was arranged by Avi Singh – PowerBIPro – link to his YouTube Channel here


My session was about Power Query and how to make your Power Queries more dynamic.

You can download my presentation and demo files via this link – here.


Erik Svensen – Blog about Power BI, Power Apps, Power Query


Locus Robotics raises $40 million to take its warehouse robots global

June 2, 2020   Big Data

Warehouse robotics startup Locus Robotics today announced it has raised $40 million, the bulk of which will be put toward accelerating R&D and the company’s expansion into new markets, including in the EU, where it opened a new headquarters. CEO Rich Faulk says Locus also intends to launch strategic reseller partnerships throughout 2020, following a year in which its number of customer deployments passed 50.

Worker shortages attributable to the pandemic have accelerated the adoption of automation. According to ABI Research, more than 4 million commercial robots will be installed in over 50,000 warehouses around the world by 2025, up from under 4,000 warehouses as of 2018. In China, Oxford Economics anticipates 12.5 million manufacturing jobs will become automated, while in the U.S., McKinsey projects machines will take upwards of 30% of such jobs.

Locus’ autonomous robots — LocusBots — can be reconfigured with virtually any tote, box, bin, or container or with peripherals like barcode scanners, label printers, and environmental sensors designed to expedite order processing. They work collaboratively with human associates, minimizing walking with a UI that recognizes workers’ Bluetooth badges and switches to their preferred language. On the backend, Locus’ LocusServer orchestrates multiple robots such that they learn efficient travel routes, sharing the information with other robots and clustering orders to where workers are. As orders come into warehouse management systems, Locus organizes them before transmitting back confirmations — providing managers real-time performance data, including productivity, robot status, and more.


When new LocusBots are added to the fleet, they share warehouse inventory status and item locations. Through LocusServer, they detect blockages and other traffic issues to improve item pick rates and order throughput. Locus’ directed picking technology actively directs workers to their next pick location, letting them select their own pace while optionally accepting challenges through a gamification feature that supports individual, team, and shift goals plus events and a mechanism managers can use to provide feedback. In addition, Locus’ backend collates various long-tail metrics, including hourly pick data, daily and monthly pick volume, current robot locations, and robot charging levels.

Locus offers a “robot-as-a-service” program through which customers can scale up by adding robots on a limited-time basis. For a monthly subscription fee, the company sends or receives robots to warehouses upon request, and it provides those robots software and hardware updates, in addition to maintenance.

Locus claims that its system — which takes about four weeks to deploy — has delivered a 2 to 3 times increase in productivity and throughput and 15% less overtime spend for brands that include Boots UK, Verst Logistics, Ceva, DHL, Material Bank, Radial, Port Logistics Group, Marleylilly, and Geodis. The company’s robots passed 100 million units picked in February, and in April, UPS announced that it would be piloting Locus machines in its own facilities.

“COVID-19 has dramatically accelerated trends that have been taking shape over several years in the logistics market, including the movement to collaborative robotics to deal with the labor crisis,” Faulk told VentureBeat via email, adding that the company’s annual recurring revenue increased 300% in 2020 year-over-year. “Our pipeline is expanding weekly with major global brands needing to automate prior to peak season to address the labor gap they will face this year.”


Zebra Technologies’ Zebra Ventures led this series D investment in Wilmington, Massachusetts-based Locus, with participation from existing backers, including Scale Venture Partners. This round brings the Quiet Logistics spinout’s total raised to over $105 million as it looks to expand its workforce from more than 120 people to 200 by 2021.

Locus competes in the $3.1 billion intelligent machines market with Los Angeles-based robotics startup InVia, which leases automated robotics technologies to fulfillment centers. Gideon Brothers, a Croatia-based industrial startup backed by TransferWise cofounder Taavet Hinrikus, is another contender. And then there’s robotics systems company GreyOrange; Otto Motors; and Berkshire Grey, which combines AI and robotics to automate multichannel fulfillment for retailers, ecommerce, and logistics enterprises. Fulfillment alone is a $9 billion industry — roughly 60,000 employees handle orders in the U.S., and companies like Apple manufacturing partner Foxconn have deployed tens of thousands of assistive robots in assembly plants overseas.



Big Data – VentureBeat


Global Supply Chain Management For A Rapid And Dynamic Transformation Environment

May 6, 2020   BI News and Info

Large corporations with global operations define their global supply chains with the intent to match supply with demand. More importantly, they consider material flows, established manufacturing and co-manufacturing sites, storage locations, cash flows, technological advantages and restrictions, regulations, tax benefits, and market opportunities.

These supply chain operations run in technology systems of record, planning and analytical applications, and global trade management systems that support solutions to resolve a complex set of regulations to maintain compliance with export and import processes.

Many use global intelligent transfer price supply chain (GITSC) business processes to:

  1. Optimize the landed costs for every product to the customer destination
  2. Define the total costs of ownership
  3. Define the relationship with the market and customers in terms of customer acceptance
  4. Understand and manage customer satisfaction and market share

Here is an example of a GITSC that represents the flow of a product manufactured in China, reprocessed in the European Union, and finally sold in the United States of America.

Figure 1: GITSC flow for a product manufactured in China, reprocessed in the European Union, and sold in the United States.

The transfer pricing strategy in this model shows how the product flow gains tax advantages by using operational ports in Singapore and the Netherlands to offload the tax revenue.

In this example, the GITSC model shows that the total global tax accumulated is $29.50, for a total accumulated profit of $611.50. If the same supply chain did not use the tax advantages of Singapore and the Netherlands, the estimated tax would be $192.30, with a net profit of $448.30. This example GITSC offers an 85% reduction in tax payments and a 36.30% increase in net revenue.
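For readers who want to check the arithmetic, the short Python sketch below recomputes both percentages from the figures quoted above; the variable names are purely illustrative.

# Figures from the example above: GITSC routing vs. a standard supply chain.
gitsc_tax, gitsc_profit = 29.50, 611.50
standard_tax, standard_profit = 192.30, 448.30

tax_reduction = (standard_tax - gitsc_tax) / standard_tax
profit_increase = (gitsc_profit - standard_profit) / standard_profit

print(f'Tax reduction: {tax_reduction:.1%}')          # ~84.7%, quoted as 85% above
print(f'Net revenue increase: {profit_increase:.1%}') # ~36.4%, quoted as 36.30% above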

The following figure shows the GITSC transfer pricing model in blue and the standard supply chain transfer pricing model in yellow.

Figure 2: GITSC transfer pricing model (blue) compared with the standard supply chain transfer pricing model (yellow).

Companies running GITSC models implement business processes supported by IT systems that allow the execution of transactional operations to comply with tax, foreign trade, and customs regulations.

Areas of opportunity for global intelligent transfer price supply chains

The first area of opportunity is the inclusion of an intelligent business system that supports decision-making as the supply chain changes. This is both a challenge and an opportunity. Such business-intelligence systems rely on the ability to collect transactional data across the supply chain. The objective of increasing the supply chain’s current analytical capabilities is to maximize global after-tax profits by determining the flow of goods and the allocation of transportation costs among the supply-chain actors. Simulation models, together with heuristic expert systems based on artificial intelligence, are a first step toward stabilizing decision-making as changes ripple through the supply chain’s elements.

The second area of opportunity is to enhance current GITSCs based on lessons learned during the COVID-19 pandemic. The opportunity lies in managing the supply chain ecosystem, not in an isolated and disintegrated industry ecosystem model.

This industrial ecosystem integration is considered the most important priority to enable supply chain ecosystems to efficiently operate in this new reality characterized by fast, dynamic changes.



Digitalist Magazine


How to Aggregate Global Data from the Coronavirus Outbreak

April 14, 2020   Sisense

As COVID-19 continues to spread, healthcare groups and companies of all kinds are under pressure to provide care in the face of increasing demand. Healthy Data is your window into how data is helping these organizations address this crisis.

As the rapid spread of COVID-19 continues, data managers around the world are pulling together a wide variety of global data sources to inform governments, the private sector, and the public with the latest on the spread of this disease.

In this article, we discuss how this data is accessed, an example environment and set-up to be used for data processing, sample lines of Python code to show the simplicity of data transformations using Pandas and how this simple architecture can enable you to unlock new insights from this data yourself.  Let’s get started.


The importance and impact of reliable data

Reliable data of this scale has helped to promote responsible decision-making in workplaces and communities around the world, including legislative action around international travel, the provision of emergency medical resources, the bolstering of financial markets, support for small business owners and proprietors, and medicine and treatment to those who are infected.  

Households and families are also using this data to prepare for the nuances of everyday life that will change as a result of this virus, whether it be securing a paycheck, buying groceries and essentials, playing with their kids, inviting the neighbors over for a barbecue or walking the dog. 

Properly prepared data is paramount to the success of our attempts to mitigate and contain the spread of the virus, and the impact of information sharing at this level is truly transformative. But this type of globally aggregated data doesn’t just appear on its own. Reliable sources first need to be discovered, retrieved, parsed, and aggregated before the result can be distributed around the world, and until now we have not been able to prepare data like this on such short notice.

Specific challenges involved with data related to the Coronavirus

A reliable source of contagion data for a global event unfolding in real time is not easy to find. The major repositories that hold data on public health issues or disease outbreaks can be accessed through an API.

Often the data needs to be communicated to the public so quickly after it is generated that it is provided via more accessible methods first, leaving programmatic access via the API for future historical analysis. These methods (PDFs or HTML tables) can be time-consuming to parse and scrape correctly and consistently across a bulk set of examples, yet that wasn’t enough to stop the developer community from getting started.

Since then, a team of researchers affiliated with Johns Hopkins University has been retrieving PDF files posted on the websites of these organizations via programmatic methods and parsing their contents into CSV files stored on a public repository on GitHub. This repository has been starred over 15k times and forked over 7k times. It is being used as source data for engineers around the world to ingest into their data pipelines.

In providing a globally useful dataset, they have connected with over 14 data sources from around the world and aggregated them into their data model. Not all of these data sources can be treated the same way; each has specific needs. Some data sources include confirmed cases while others include presumptive cases. Some data sources are Excel spreadsheets or HTML tables while others are location pins on Google Maps. And some are in languages other than English.

The team at JHU has done a fantastic job in performing the bulk of the tricky normalization for us, yet they continue to face challenges every day in conforming to the new shapes, sizes, and formats of data that seem to be coming online exponentially as this virus continues to spread across our people.  In the next section we’ll provide the link to access this data and perform some data cleaning and normalization operations using Python and Pandas!

How to set up your own data environment for analyzing COVID-19 data

Data Access

Data access is provided by Johns Hopkins University.  They have built a pipeline that ingests global situation reports from the World Health Organization, translates data from South Korea, China, and Taiwan into English and has access to 10+ other global sources that appear to be quite labor-intensive to retrieve (a lot of web-scraping).  

While some cleaning and normalization has already gone into this dataset, such as converting all timestamps to a UTC time zone and addressing some inconsistencies related to update frequency, there are still plenty of opportunities for us to dive in and focus on cleaning and normalization activities that will unlock real insights.

Data Publisher: Johns Hopkins University*
Scope: USA
Source: GitHub
Format: JSON
Update Cycle: Daily
Location: https://github.com/CSSEGISandData/COVID-19

* Johns Hopkins University is actively parsing daily situation reports from WHO and integrating into their data model for the open-source community to access
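As a minimal illustration of how accessible this repository is, the following sketch reads a single daily report straight from GitHub with pandas. It assumes the repository still stores one CSV per day (named MM-DD-YYYY.csv) under csse_covid_19_data/csse_covid_19_daily_reports; the helper name is hypothetical.

import pandas as pd

# Raw file URL layout of the public JHU CSSE repository (one CSV per day).
base_url = ('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/'
            'csse_covid_19_data/csse_covid_19_daily_reports')

def load_daily_report(date_str):
    """Fetch one daily situation report, e.g. load_daily_report('04-10-2020')."""
    return pd.read_csv(f'{base_url}/{date_str}.csv')

print(load_daily_report('04-10-2020').head())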

Data infrastructure

Here is an example of a simple, cloud-based architecture that is suitable for rapid deployment of a data pipeline. In the environment created for this article, a Virtual Private Cloud (VPC) containing a Linux EC2 instance, a PostgreSQL database, and an internet gateway was spun up on Amazon Web Services and then connected to an external BI dashboarding tool.

The pipeline manager hosts a Python installation and the Apache Airflow task scheduler (originally developed by Airbnb), which operates the data pipeline. After simple configuration, Airflow is up and running, executing the Python scripts and writing data to the database every time the source data is updated.

Meanwhile, the internet gateway allows for external BI tools to connect to the data using a trusted connection so the data can be explored visually, and reports generated for communication of information to others.  This is done without needing to download a local copy of the data. This architecture allows for data in the database and downstream reports to stay up to date automatically.
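As a rough sketch of what that scheduling layer can look like, the snippet below defines a minimal Airflow DAG that runs a placeholder update_covid_data function once a day. The DAG id, schedule, and function body are illustrative assumptions rather than the exact pipeline used here, and the import paths assume a recent Airflow installation.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def update_covid_data():
    # Placeholder for the real work: pull the latest daily reports, clean them
    # with pandas (see the steps below), and write the result to PostgreSQL.
    pass

with DAG(
    dag_id='covid19_daily_ingest',
    start_date=datetime(2020, 4, 1),
    schedule_interval='@daily',  # rerun once per day as new reports are published
    catchup=False,
) as dag:
    PythonOperator(
        task_id='ingest_daily_reports',
        python_callable=update_covid_data,
    )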


Processing and cleaning the data using Python and Pandas, writing to SQL database

And finally, we will go through a few simple examples of data cleaning and normalization performed in Python that can be applied to this dataset before inserting it into a SQL table and querying it for valuable insights. The goal here is to format the data sources into a common structure for effective bulk processing.

1. Combine data from multiple .csv files and drop any duplicates that may exist.

import os
import pandas as pd

# Read every daily report CSV in the directory into one DataFrame.
files = [f for f in os.listdir('csse_covid_19_daily_reports') if f.endswith('.csv')]
df = pd.DataFrame()
for filename in files:
    filepath = f'csse_covid_19_daily_reports/{filename}'
    with open(filepath) as f:
        da = pd.read_csv(f)
    df = df.append(da, ignore_index=True, sort=False)

# Remove exact duplicate rows that appear across overlapping reports.
df.drop_duplicates(inplace=True)

2. Fill null values with empty strings to prevent rows from being removed from subsequent table transformations.

df['Province/State'].fillna('', inplace=True)
df['Country/Region'].fillna('', inplace=True)

3. Sort dates in descending order with most recent at the top.

df.sort_values(['Last Update'], ascending=False, inplace=True)

4. Convert datetimes to date and prepare to group and retrieve the most recent record from each date.

df['Last Update'] = df['Last Update'].apply(lambda x: pd.to_datetime(x).strftime('%Y-%m-%d'))

5. Group and aggregate to retrieve the latest report from each day.

# Rows are sorted newest-first, so 'first' keeps the most recent values per group;
# the grouping keys are restored as columns by reset_index() in step 7.
df = df.groupby(['Country/Region', 'Province/State', 'Last Update']) \
    .agg({
        'Confirmed': 'first',
        'Deaths': 'first',
        'Recovered': 'first',
        'Latitude': 'first',
        'Longitude': 'first'})

6. Convert floating-point decimal fields to type “int.”

for col in ['Confirmed', 'Deaths', 'Recovered']:
    df[col] = df[col].astype(int)

7. Write the data to a SQL database.

df.reset_index(inplace=True)
df.to_sql('cssc_daily_reports', con=connection, index=False, if_exists='replace')
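The connection object used above is assumed to be a SQLAlchemy engine pointed at the PostgreSQL instance; one minimal way to create it, with placeholder credentials, looks like this:

from sqlalchemy import create_engine

# Hypothetical connection string; substitute the real RDS endpoint and credentials.
connection = create_engine('postgresql+psycopg2://user:password@db-host:5432/covid19')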

8. Connect to your SQL database and visualize your newly written data.

In this graphic we see that China’s cases mostly leveled off after 3-4 weeks, while growth in other countries is still ongoing. 

As of 4/10/2020

In this graphic we see the differences in new cases each day between the United States, Germany, Italy, and Spain. 

As of 4/10/2020

Fighting fire with data

As we continue to encounter global challenges at this scale, whether it be the coronavirus or another civilization-halting crisis, the importance of data collaboration across countries and state lines should not be underestimated. Not only because data wins debates, but because good data converts talk into consensus and action.


Govind Rajagopalan is a Senior Engineering Manager at Sisense. He has over 15 years of experience across both engineering and management roles with diverse companies, domains, teams, and technologies. He is excited to teach, help his teammates thrive, and have fun improving his craft.


Blog – Sisense


How COVID-19 Exposed Weaknesses In The Global Supply Chain

April 11, 2020   BI News and Info

It’s flooding the news everywhere: More medical equipment is desperately needed as governments around the world scramble to obtain more masks, ventilators, gloves, and sanitizer to support the latest hot spots impacted by the coronavirus. Nobody seems to know where to get these supplies, nor do they understand what the real demand and supply actually are.

At the same time, we see specific consumer goods items in extremely high demand. It is amazing that when it comes down to necessities, we all appear to need about 20 to 30 items, such as pasta, eggs, flour, milk, dried beans, and toilet paper. We have seen a huge demand for these items, which has led to empty shelves in retail stores.

China is also trying to get back to “normal” and restart production. However, there is a lack of demand now for many of the “non-essential” items they are geared up to produce such as cars, fashion, or high-tech products – especially when it relates to exports to Europe and the U.S.

Supply chains are broken

Just looking at these examples, it’s obvious that supply chains are broken.

We are accustomed to a well-oiled global supply chain that promises “same-day delivery” of many products. Local events such as extreme weather, wildfires, and earthquakes always had an impact on supply chains, but even if they bent, they did not break. This time it’s different – this time supply chains truly are broken.

There is lack of visibility, there is lack of collaboration, and there is lack of coordination.

Because almost all supply chains are still based on a transactional, reactive model, when we have an unexpected spike or drop in demand, it takes time for it to ripple through the multi-tiered supply chain. Therefore, it can take weeks and even months to adjust supply chains – and it will take weeks and months to get inventory out of the supply chain once we enter the “new normal.”

All in all, companies and the supply chains were not prepared for a pandemic.

We can rebuild them

There are some bright spots of positive news. Even if certain products are not always available on retail shelves, there isn’t a crisis in food supply or consumer products.

Also, many companies around the world started to re-purpose their production, with many brewing companies and distilleries producing hand sanitizers, fashion companies producing masks, and automotive companies looking to produce ventilators.

These are all honorable examples of how to make these kinds of changes on the fly, as difficult as they may be. We all expect high quality in these products (especially when it comes to medical devices) and many of these products have different regulations by country. Even if a plant can be re-purposed and adjust fast, what about the components for these products? Can suppliers deliver to all of these new production locations? The answer is yes, as it is proven every day.

This tells us that, despite the fact that supposedly “well-oiled” supply chains are broken, it is possible to react quite fast, re-purpose production, and provide the help that is desperately needed.

Technology is available to support companies to get this done. There are collaboration capabilities that ensure the integrity of the processes and the quality of the product. And yes, this can be set up on the fly – or at least in a very short time.

So, to answer the question, what’s wrong with the supply chains? They are broken, and they need to evolve to a new normal.

This will not be a mere fix of current processes and solutions. It will require a true change to a more real-time, data-driven, and predictive model.

I predict that all of the discussions about Industry 4.0 (the Fourth Industrial Revolution) and the digital transformation of industries will be accelerated as a result of the COVID-19 crisis.

To learn how supply chain leaders minimize risk and maximize opportunities, attend this upcoming Webinar with Richard Howells from SAP and Ben Wright, associate editor from Oxford Economics, to review a recent research study of 1,000 COOs and chief supply chain officers.


Digitalist Magazine


Global Food Solutions Puts Affordable, Nutritious Meals in the Hands of U.S. Students

February 22, 2020   NetSuite

Posted by Hayley Null, industry marketing lead, manufacturing, food and beverage

Opportunity and challenge often go hand-in-hand — just ask Global Food Solutions. The Hauppauge, N.Y.-based food distributor found itself at the crux of enormous opportunity a decade ago when the Obama Administration tightened school food regulations for the first time in a generation.

CEO Michael Levine saw an opening to take Global Food Solutions—then a young regional company looking to make waves with a healthy, sustainable approach to mass food production—to a new level. Levine knew that the big names in the space, companies like General Mills, Kellogg’s and Marriott, weren’t prepared for the changes that were coming.

“There was this massive demand that was about to hit the marketplace for better nutrition products on a massive scale for the school food service industry,” said Levine. “We saw this as a very unique opportunity to capitalize.”

So Global Food Solutions pounced. It started adding items to its list of product SKUs, adding school districts, and forming relationships with more distribution partners. Over the next several years, it found itself expanding across the country, and today it provides lunches for more than 5,000 schools, most located in underprivileged urban areas where student lunches are subsidized by the federal government.

Growth Breeds Complexity

But as the company grew, and its business became increasingly complex, its backend systems were not up to the task. It had been relying on a combination of QuickBooks and Excel spreadsheets to manage financials, and Global Food Solutions’ growth exposed the limitations of that setup.

“We had these pockets of software that were doing specific tasks in the business for us, but none of them were talking to each other. It was okay when we only had a handful of SKUs and it was one manufacturer and we were only shipping in one state and it was very controlled,” Levine said. “But as we expanded our model, we began to push these systems to — actually, not to, but well beyond — a breaking point.”

The problems this created were ubiquitous. The company had distribution challenges; it struggled to track inventory; and its workflow from orders to customer service were choppy and disconnected. Levine recalls the company having to put band-aids on multiple areas where the siloed systems couldn’t keep up with the business.

“It absolutely was not going to be a sustainable solution by any means,” he said. “It was almost kind of running away from us, and we needed to find a way to get a handle on it because we knew that we couldn’t scale to the next level and do all of the things we wanted to do without the right software provider and the right services.”

The Search for an Answer

Finally, in early 2018, Global Food Solutions began to search for a solution that would help the company bring all of those pieces together. It looked at several options, including a newer QuickBooks Premier product and SAP for Wholesale Distribution Products. Ultimately, however, it turned to NetSuite, not only because of its current capabilities, but the flexibility it would offer going forward.

This was critical, Levine said, because the business was looking at doing some new things. For example, it wanted to create a workflow that would support the assembly of meal kits, which would require multiple components of the business to be tightly integrated. Those kits would be delivered complete and handed out as after school snacks for economically disadvantaged students who might not have other snack options presented to them.

“The NetSuite system was able to do those things with ease,” said Levine. “It’s built in a way that is ready to expand with us as we get there. The other software companies that we evaluated, they were just not able to do that.”

With SuiteSuccess, Global Food Solutions worked closely with NetSuite and had its system deployed and live in 85 days, allowing it to work with the system before the start of the following school year. Levine said the NetSuite Professional Services team made a number of recommendations throughout the implementation process that have really helped the business take full advantage of the technology.

Swift Results

In the short time the company’s been on NetSuite, it has already seen significant improvements. The company has cut the time it takes to receive an order, create a PO and confirm with vendors by 75%. For the first time, it has clear visibility into inventory, with data delivered in clear dashboard reports. And because its data is in a single system, Global Food Solutions has cut the time it takes to resolve simple customer inquiries that once took up to 30 minutes to mere seconds.

“It’s just amazing how we feel we’re able to get so much more time back to focus on growth and creation for our business and not worrying about things being wrong,” said Levine.

NetSuite also enables Global Food Solutions to achieve the scale needed to serve so many lunches despite having just 20 employees. The company relies on manufacturing partners for production and for tapping into their buying power to purchase ingredients, and also works with a network of distribution partners.

NetSuite has become the secret ingredient to making Global Food Solutions’ network work together seamlessly. It’s integrated with the company’s CRM, enabling emails to be easily attached to customer records, and the company is exploring an integration with Box to bring more data into view.

Far From Done

Much more is on the horizon. The company plans to tap NetSuite’s ability to bring automation into its workflow process. It wants to expand its demand forecasting and planning capabilities and set up auto-generated purchase orders that would maintain stock levels. It also is looking to integrate with NetSuite partner Tesorio, which will help it improve its management of cash flow and receivables.

For now, however, Levine is simply enjoying the contrast between the past experience of running the business on disconnected, stitched together components, and enjoying the integrated, data-rich environment NetSuite has enabled.

Said Levine: “It has been an incredible breath of fresh air.”

Posted on Wed, February 19, 2020, by NetSuite


The NetSuite Blog
