GE Aviation and Teradata Expand Partnership

GE Aviation and Teradata (NYSE: TDC) today announced that GE Aviation will become the exclusive provider of Teradata products and services for commercial aviation markets, providing the world’s biggest airlines with a single, comprehensive framework that combines high-performance analytics in the cloud from Teradata with edge-connectivity services from GE Aviation.
Teradata and GE Aviation previously announced a partnership to jointly provide products and services to commercial aviation markets. The two companies are extending this relationship to provide airlines with wide-ranging operational insights that power impactful business decisions.
“We can provide the best analytic environment for airline customers by adding Teradata’s powerful analytic solutions with built-in support for hybrid cloud environments,” said John Mansfield, Chief Digital Officer of GE Aviation. “This partnership enables us to bring a holistic framework of enterprise data and business solutions to airlines.”
An example of this would be GE Aviation’s FlightPulse application, which will include a connection to Teradata software to enable access to high-performance analytics enterprise-wide. FlightPulse automatically merges complex aircraft data with crew schedules, allowing commercial pilots to visualize personal analytics and conduct their own exploration. Empowered to conduct their own analysis and peer comparisons, pilots discover areas to optimize operations and efficiency, while reducing risk, fuel consumption and carbon emissions. By combining this operational data with passenger-based data within the Teradata system, an airline is able to create a holistic enterprise view of its data and extract meaningful business outcomes.
“Operationalizing the analytics that provide high-impact, trusted business outcomes is one of the hardest things to pull off in an enterprise-scale environment,” said Martyn Etherington, Chief Marketing Officer at Teradata. “The first step is often making sure various data silos can communicate, and our partnership with GE Aviation is designed to make this happen almost automatically for the airline industry. Ensuring that these large, global customers have quick access to a powerful, yet flexible, analytics solution like the Teradata Analytics Platform is the second step, and we are delighted that GE Aviation will be offering this product to its customers.”
“By expanding our partnership, the aviation market will gain an analytic solution that delivers comprehensive insights. We could not be more excited for the insights we are about to unlock for our customers,” said Andrew Coleman, Chief Commercial Officer for GE Aviation’s Digital Solutions business. “It became clear that, for data and analytics, Teradata excels at the size and scale required by a global airline.”
The Teradata Analytics Platform is a key offering from Teradata, the cloud-based data and analytics leader. It delivers powerful analytic functions and engines that can be used with multiple data types. Coupled with the tools and languages that each user individually prefers, the Teradata Analytics Platform is the future of analytics, delivering secure, scalable, high-performance analytics in the cloud, on-premises, or both.
For airline customers that are already using GE Aviation for operations, assets, and network management, adding the Teradata Analytics Platform would deliver enterprise-wide insights with results ranging from improved flight operations and predictive maintenance to increased operational efficiencies and higher customer satisfaction.
Through their extended partnership, Teradata and GE Aviation are particularly well positioned to enhance the aviation analytics market, which Markets and Markets Research estimates is a $2.16 billion industry that will grow to an estimated $4.23 billion by 2021.

About GE Aviation:
GE Aviation is part of GE (NYSE: GE), the world’s Digital Industrial Company, transforming industry with software-defined machines and solutions that are connected, responsive and predictive. With people, services, technology and scale, GE delivers better outcomes for customers by speaking the language of industry.

Teradata United States

They both smell the same

Via fb


About Krisgo

I’m a mom who has worn many different hats in this life; from scout leader, camp craft teacher, parents group president, colorguard coach, member of the community band, and stay-at-home mom to full-time worker, I’ve done it all – almost! I still love learning new things, especially creating and cooking. Most of all, I love to laugh! Thanks for visiting – come back soon!

Deep Fried Bits

5 Resources to Get Ready for Microsoft’s Business Apps Summit

Our team is headed to Microsoft’s Business Applications Summit next week, the “place for Dynamics 365, Power BI, Excel, PowerApps, and Microsoft Flow users to connect, collaborate, and pack in as much learning as possible.” To get ready for the event, we’re sharing five of our favorite Business Applications blogs and videos that we’ve published over the past few months.

1. Spring 2018 Update Webinar: What’s New in PowerApps and Flow

The Spring 2018 release brought about some big changes to the Business Applications Platform including PowerApps, Flow, and the Common Data Service. This session focuses specifically on changes to PowerApps for Applications and Flow. Our speakers will dive into how these tools are modernizing business processes across Dynamics 365 applications while offering up low and no code solutions to help transform how people work. [WATCH WEBINAR]

2. Displaying Dynamics 365 Data in your PowerApps Application

In this blog post, we will assume that you are building a PowerApps app based on the Opportunity entity in Dynamics 365. The use case is that you want a sales person to be able to view “today’s” appointments and view only their own opportunities (“My Opportunities”). [READ BLOG]

3. How to Integrate Power BI with Dynamics 365 for Finance and Operations

The integration between Dynamics 365 for Finance and Operations and Power BI gives users the capability to personalize workspaces with tiles from Power BI and to add links to reports hosted there, so users can access Power BI reports directly from Dynamics 365 for Finance and Operations. [READ BLOG]

4. The 5 Best Things About Microsoft Flow Video

Our Dynamics 365 University education programs have something for everyone whether you’re new to Dynamics 365 or are a power user, admin, or developer! Hear from one of our Training Consultants, Avni Pandya, as she talks about our Citizen Developer course which will teach you about low code and no code tools that can help you create Dynamics 365 solutions without needing to be a developer. One of those great tools is Microsoft Flow. [WATCH VIDEO]

5. Row-Level Security in Power BI with Dynamics 365

Power BI offers a suite of security features to help restrict data. One way to do this is with row-level security (RLS), which can be used in Power BI Desktop to restrict data access for specific users by filtering data at the row level and defining filters within roles. In this blog, learn how to set up this feature in Power BI and see an example of how you can use it in Dynamics 365. [READ BLOG]

Now you can see why we are so excited for Microsoft’s Business Applications Summit. These applications are powerful and just keep getting better! We’ll make sure to keep you posted on all things Biz Apps via this blog so be sure to SUBSCRIBE to get our daily post sent straight to your inbox!

Happy D365’ing!

PowerObjects- Bringing Focus to Dynamics CRM

Amazon’s Matt Wood on the major takeaways from AWS Summit 2018

On Wednesday, just days ahead of Google’s Cloud Next conference, Amazon hosted its annual Amazon Web Services cloud computing conference, AWS Summit, at the Jacob K. Javits Convention Center in New York City. It didn’t hold back.

SageMaker, the Seattle company’s full-stack machine learning platform, got two major updates: SageMaker Streaming Algorithms and SageMaker Batch Transform. The former, which is available for neural network models created with Google’s TensorFlow, lets customers stream data from AWS’ Simple Storage Service (S3) directly into SageMaker GPU and CPU instances. The latter allows them to transfer large training datasets without having to break them up with an API call.

In terms of hardware, Amazon added Elastic Compute Cloud (EC2) to its Snowball Edge system, an on-premises Intel Xeon-based platform for data processing and collection. And it enhanced its local storage, compute, data caching, and machine learning inference capabilities via AWS Greengrass, AWS Lambda, and Amazon S3, enabling new categories of virtualized applications to run remotely in work environments with limited connectivity.

On the services front, Amazon Transcribe’s new Channel Synthesis tool merges call center audio from multiple channels into a single transcription, and Amazon Translate now supports Japanese, Russian, Italian, Traditional Chinese, Turkish, and Czech. Amazon Comprehend, Amazon’s natural language processing (NLP) service, now boasts improved text analysis thanks to syntax identification.

Finally, Amazon revealed a slew of new and extended partnerships with major clients. Fortnite developer Epic Games said it’s building “new games [and] experiences” on AWS; 21st Century Fox will use Amazon’s cloud service for the “vast majority” of on-demand content delivery; Major League Baseball and Formula 1 are planning to tap AWS’ AI tools for real-time data analytics; and Celgene will leverage Amazon’s machine learning platform to expedite drug analysis and validation.

It’s a lot to take in. For a bit of context around this week’s announcements, I spoke with Dr. Matt Wood, general manager of artificial intelligence at AWS, who shed light on Amazon’s momentum in cloud computing, overarching trends in AI, and the problem of bias in machine learning models and datasets.

Here’s a transcript of our interview, which has been edited for length and clarity.

VentureBeat: Today, you announced SageMaker Streaming Algorithms, which allows AWS customers to train machine learning models more quickly. What was the motivation? Was this something for which customers expressed a deep desire?

Matt Wood: There are certain things across AWS that we want to invest in, and they’re the things that we think aren’t going to change over time. We’re building a business not for one year, 10 years, or 50 years, but 100 years — far in excess of when I’m going to be around and in charge of it. When you take that long-term view, you tend to put money not into the things you think are going to change, but into the things you think are going to stay the same.

For infrastructure, and for AWS — and this is true for machine learning as well — cost is really a big driver of that … It’s impossible for us to imagine our customers saying that they want the service to be more expensive, so we go out of our way to drive down costs.

A really good example is something we announced a couple of years ago that we call Trusted Advisor. Trusted Advisor is a feature you can turn on inside your AWS account that automatically, without you having to do anything, makes recommendations about how to reduce your AWS bill. We delivered over $300 million in annual savings to customers that way.

These are some of the advantages that the cloud provides, and they’re advantages that we want to maintain.

VentureBeat: On the client side of things, you announced a lot of strategic partnerships with Epic, Major League Baseball, and others, almost all of which said they’ll be using AWS as their exclusive cloud platform of choice. So what’s the movement there? What’s the feedback been like so far?

Wood: We see a lot of usage in sports analytics. Formula 1 chose AWS as their machine learning platform, Major League Baseball chose AWS as their machine learning platform, and the National Football League chose AWS as their machine learning platform. The reason for that is they want to drive better experiences for their viewers, and they see machine learning as a key piece of the advanced next-generation statistics they want to bring into their production environment — everything from route prediction [to] stat prediction.

That’s just one big area. Other areas are pharmaceuticals and health care. We have HIPAA compliance, which allows [our] customers to work with health care workloads, so we see a lot of momentum in disease prediction. We do diabetic retinopathy prediction, readmission prediction — all those sorts of things.

To that end, we announced [this week that] Bristol Myers Squibb is using SageMaker to accelerate the development of the innovative medicine that they build. Celgene is another really good example — Celgene actually runs Gluon, which is our machine learning library, on top of SageMaker, and they take advantage of the P3 GPUs with the Nvidia Volta under the hood. So, you know, that’s a really good example of a customer that has materially accelerated its ability to bring drugs to market more quickly and more safely.

VentureBeat: Amazon offers a lot of machine learning services to developers, like Rekognition — your computer vision platform — and Amazon Translate. But you have a lot of competition in the space from Google, Microsoft, and others. So how are you differentiating your APIs and services from the rest out there?

Wood: Candidly, we don’t spend a [lot of] time thinking about what our competitors are up to — we tend to be way more customer-focused. We’ve launched 100 new services and features since re:Invent 2017, and no other provider has done more than half of that. I would say 90-95 percent of what we’ve launched has been directly driven by customer feedback, and the other 5-10 percent is driven by our attempts to read between the lines and try to figure out what customers don’t quite know to ask for yet.

SageMaker is really helpful in cases where customers have data which they believe has differentiating value. Then, there are application developers who may not have a lot of training data available or who just want to add some level of intelligence to their application quickly — that’s where Rekognition, Rekognition Video, Transcribe, Comprehend, Polly, Lex, and Translate come in.

We joke about this, but our broader mission is really to make machine learning boring and totally vanilla, just part of the course of doing business and another tool in the tool chest. Machine learning, we kind of forget, used to be a huge investment requirement in the hundreds of millions of dollars to get up and running. It was completely out of reach, and I think we’ve made huge progress in a very, very short amount of time.

We have a saying in Amazon: It’s still day one for the internet. And for machine learning, we haven’t even woken up and had our first cup of coffee yet. But there’s a ton of excitement and momentum. We have tens of thousands of active developers on the platform and 250 percent growth year over year. Eight out of 10 machine learning workloads run on AWS — twice as many as any other provider. And customers really value that focus on continuous platform improvement. I’m excited about where we’re headed.

VentureBeat: Voice recognition and natural language processing, in particular, are extremely competitive spaces right now. I know you said you don’t think too much about what your competitors are doing, but what kind of gains have you made relative to the market?

Wood: These services are off to a great start, and we see contact centers being a really big area.

A lot of customers use Amazon Lex as their first point of contact. The National Health Service (NHS) in the U.K. ran a pilot where they introduced a Lex chatbot, and it was able to handle 40 percent of their call volume. This is the centralized health provider in all of the U.K., so that’s really meaningful in terms of patients getting to talk to somebody more quickly, or NHS being able to operate its contact center more efficiently.

[This week] we announced Channel Splitting, where we were able to take call center recordings — two recordings, one of the agent and one of the customer — in the same file, split out the channels, transcribe them both independently, and merge the transcripts together. You get a single file out, and then you can take that and pass it off to Comprehend to find out what’s going on in the conversation and what people were talking about. You can also run compliance checks to see if contact center agents are saying scripts exactly as they’re designed to be said.

From an efficiency perspective, large contact centers are expensive and difficult for most organizations to run, and from Lex through to the management, compliance, analytics, and insight you can get from the data there, we think they’re a really compelling AWS use case.

VentureBeat: Shifting gears a bit. You mentioned inclusion a bit earlier, and as you probably know, with respect to computer vision, we’ve got a long way to go — facial recognition is an especially difficult thing for developers and infrastructure providers to get right. So how do you think it might be tackled? How can we improve these algorithms that, for example, appear to be biased against people of color and certain ethnicities and races?

Wood: It’s the classic example of garbage in, garbage out. If you’re not really careful about where you get your data from, and you accidentally, with good intentions, introduce selection criteria that skew what should be a representative set, you’re going to introduce inaccuracies. The good news is that with machine learning, you can identify, measure, and systematically reduce those inaccuracies.

One of the key benefits of services like SageMaker is that the quicker you can train and retrain models, the quicker you can identify areas of accuracy and start to narrow down the inaccuracies. So in that respect, any investment that we make, such as SageMaker Streaming Algorithms, contributes to spinning that flywheel faster and allows developers to iterate and build more sophisticated models that overcome some of the noise inside the data.

Basically, investment in our frameworks allows developers to build more sophisticated models, train models more quickly, and operate more efficiently in a production environment. All of it helps.

Big Data – VentureBeat

10 Techniques to Boost Your Data Modeling

With new possibilities for enterprises to easily access and analyze their data to improve performance, data modeling is morphing too. More than arbitrarily organizing data structures and relationships, data modeling must connect with end-user requirements and questions, as well as offer guidance to help ensure the right data is being used in the right way for the right results. The ten techniques described below will help you enhance your data modeling and its value to your business.

1. Understand the Business Requirements and Results Needed

The goal of data modeling is to help an organization function better. As a data modeler, collecting, organizing, and storing data for analysis, you can only achieve this goal by knowing what your enterprise needs. Correctly capturing those business requirements to know which data to prioritize, collect, store, transform, and make available to users is often the biggest data modeling challenge. So, we can’t say it enough: get a clear understanding of the requirements by asking people about the results they need from the data. Then start organizing your data with those ends in mind.

2. Visualize the Data to Be Modeled

Staring at countless rows and columns of alphanumeric entries is unlikely to bring enlightenment. Most people are far more comfortable looking at graphical representations of data that make it quick to see any anomalies or using intuitive drag-and-drop screen interfaces to rapidly inspect and join data tables. Data visualization approaches like these help you clean your data to make it complete, consistent, and free from error and redundancy. They also help you spot different data record types that correspond to the same real-life entity (“Customer ID” and “Client Ref.” for example), to then transform them to use common fields and formats, making it easier to combine different data sources.
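The standardization step above can be sketched in a few lines of Python. This is an illustrative sketch only; the field names and records are invented, not drawn from any particular tool:

```python
# Map differently named fields that refer to the same real-life entity
# ("Customer ID" vs. "Client Ref.") onto one common field name, so that
# records from different sources can be combined on shared keys.

FIELD_ALIASES = {
    "Customer ID": "customer_id",
    "Client Ref.": "customer_id",
    "Name": "name",
}

def normalize(record):
    """Rename known aliases; pass unknown fields through unchanged."""
    return {FIELD_ALIASES.get(key, key): value for key, value in record.items()}

crm_record = {"Customer ID": "C-1001", "Name": "Acme"}
billing_record = {"Client Ref.": "C-1001"}

# Both records now describe the same customer under the same field name.
assert normalize(crm_record)["customer_id"] == normalize(billing_record)["customer_id"]
```

With common fields in place, joining the two sources becomes a simple key match instead of a manual mapping exercise.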

3. Start with Simple Data Modeling and Extend Afterwards

Data can become complex rapidly, due to factors like size, type, structure, growth rate, and query language. Keeping data models small and simple at the start makes it easier to correct any problems or wrong turns. When you are sure your initial models are accurate and meaningful you can bring in more datasets, eliminating any inconsistencies as you go. You should look for a tool that makes it easy to begin, yet can support very large data models afterward, also letting you quickly “mash-up” multiple data sources from different physical locations.

4. Break Business Enquiries Down into Facts, Dimensions, Filters, and Order

Understanding how business questions can be defined by these four elements will help you organize data in ways that make it easier to provide answers. For example, suppose your enterprise is a retail company with stores in different locations, and you want to know which stores have sold the most of a specific product over the last year. In this case, the facts would be the overall historical sales data (all sales of all products from all stores for each day over the past “N” years), the dimensions being considered are “product” and “store location”, the filter is “previous 12 months”, and order might be “top five stores in decreasing order of sales of the given product”. By organizing your data using individual tables for facts and for dimensions, you facilitate the analysis for finding the top sales performers per sales period, and for answering other business intelligence questions as well.
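The retail example can be expressed directly in code: the fact table holds daily sales rows, the dimensions are product and store, the filter is a date window, and the order is descending units sold. A minimal Python sketch with invented data:

```python
from datetime import date

# Fact table: one row per (day, product, store) with units sold. Data invented.
sales_facts = [
    (date(2018, 3, 5), "widget", "Berlin", 40),
    (date(2018, 5, 9), "widget", "Paris", 75),
    (date(2018, 6, 1), "widget", "Berlin", 30),
    (date(2016, 1, 2), "widget", "Paris", 500),  # falls outside the filter window
]

def top_stores(facts, product, since, n=5):
    """Aggregate the facts over the 'store' dimension, applying the
    product dimension and the date filter, then order by units sold."""
    totals = {}
    for day, prod, store, units in facts:
        if prod == product and day >= since:        # dimensions + filter
            totals[store] = totals.get(store, 0) + units
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]  # order

ranking = top_stores(sales_facts, "widget", since=date(2017, 7, 1))
# ranking → [("Paris", 75), ("Berlin", 70)]
```

Separating the fact rows from the dimension logic is exactly what makes the same data answer other questions (top products, worst months) without restructuring.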

5. Use Just the Data You Need, Rather Than All the Data Available

Computers working with huge datasets can soon run into problems of computer memory and input-output speed. However, in many cases, only small portions of the data are needed to answer business questions. Ideally, you should be able to simply check boxes on-screen to indicate which parts of datasets are to be used, letting you avoid data modeling waste and performance issues.
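One way to honor this rule, sketched below with invented records, is to project only the columns a given question needs before any analysis runs:

```python
# Keep only the columns a question needs instead of carrying every field
# of every record through the analysis.
def project(records, columns):
    return [{col: rec[col] for col in columns} for rec in records]

rows = [
    {"store": "Berlin", "units": 40, "clerk": "A", "note": "restocked"},
    {"store": "Paris", "units": 75, "clerk": "B", "note": "promo day"},
]

slim = project(rows, ["store", "units"])  # smaller working set for the query
```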

6. Make Calculations in Advance to Prevent End User Disagreements

A key goal of data modeling is to establish one version of the truth, against which users can ask their business questions. While people may have different opinions on how an answer should be used, there should be no disagreement on the underlying data or the calculation used to get to the answer. For example, a calculation might be required to aggregate daily sales data to derive monthly figures, which can then be compared to show best or worst months. Instead of leaving everyone to reach for their calculators or their spreadsheet applications (both common causes of user error), you can avoid problems by setting up this calculation in advance as part of your data modeling and making it available in the dashboard for end users.
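The monthly-aggregation example might look like the following sketch: the calculation is defined once, upstream, and every dashboard reads the same result. Figures are invented for illustration:

```python
# Aggregate daily figures into monthly ones in one place, so every
# dashboard user works from the same numbers.

daily_sales = {  # "YYYY-MM-DD" -> revenue
    "2018-01-03": 120.0,
    "2018-01-17": 80.0,
    "2018-02-02": 250.0,
}

def monthly_totals(daily):
    totals = {}
    for day, revenue in daily.items():
        month = day[:7]  # "YYYY-MM"
        totals[month] = totals.get(month, 0.0) + revenue
    return totals

totals = monthly_totals(daily_sales)
best_month = max(totals, key=totals.get)  # the "best month" every user agrees on
```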

7. Verify Each Stage of Your Data Modeling Before Continuing

Each action should be checked before moving to the next step, starting with the data modeling priorities from the business requirements. For example, an attribute called the primary key must be chosen for a dataset, so that each record in the dataset can be identified uniquely by the value of the primary key in that record. Suppose you chose “ProductID” as a primary key for the historical sales dataset above. You can verify that this is satisfactory by comparing a total row count for “ProductID” in the dataset with a total distinct (no duplicates) row count. If the two counts match, “ProductID” can be used to uniquely identify each record; if not, look for another primary key. The same technique can be applied to a join of two datasets to check that the relationship between them is either one-to-one or one-to-many, and to avoid many-to-many relationships that lead to overly complex or unmanageable data models.
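The row-count check described above is a one-liner in practice: a column is a usable primary key only if its total count equals its distinct count. A sketch with invented records:

```python
# A column qualifies as a primary key only if its total row count equals
# its distinct (deduplicated) row count.

def is_valid_primary_key(records, key):
    values = [rec[key] for rec in records]
    return len(values) == len(set(values))

sales_rows = [
    {"ProductID": "P1", "OrderID": "O1"},
    {"ProductID": "P1", "OrderID": "O2"},  # same product sold twice
]

assert not is_valid_primary_key(sales_rows, "ProductID")  # duplicates: rejected
assert is_valid_primary_key(sales_rows, "OrderID")        # unique: acceptable
```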

8. Look for Causation, Not Just Correlation

Data modeling includes guidance in the way the modeled data is used. While empowering end users to access business intelligence for themselves is a big step forward, it is also important that they avoid jumping to wrong conclusions. For example, perhaps they see that sales of two different products appear to rise and fall together. Are sales of one product driving sales of the other (a cause-and-effect relationship), or do they just happen to rise and fall together (simple correlation) because of another factor such as the economy or the weather? Confusing causation and correlation here could lead to targeting wrong or non-existent opportunities, and thus wasting business resources.
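The danger is easy to demonstrate: two series can be almost perfectly correlated while neither drives the other, because both track a shared third factor. A small sketch with invented numbers:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

season = [1, 2, 3, 4, 3, 2]                 # shared driver, e.g. temperature
ice_cream = [10 * s + 5 for s in season]    # both products track the season...
sunscreen = [8 * s + 2 for s in season]     # ...but neither causes the other

r = pearson(ice_cream, sunscreen)  # near 1.0 despite no causal link
```

A high `r` here says nothing about one product driving the other; acting on it (say, bundling the two products) could target a relationship that does not exist.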

9. Use Smart Tools to Do the Heavy Lifting

More complex data modeling may require coding or other actions to process data before analysis begins. However, if such “heavy lifting” can be done for you by a software application, this frees you from the need to learn about different programming languages and lets you spend time on other activities of value to your enterprise. A suitable software product can facilitate or automate all the different stages of data ETL (extracting, transforming, and loading). Data can be accessed visually without any coding required, different data sources can be brought together using a simple drag-and-drop interface, and data modeling can even be done automatically based on the query type.

10. Make Your Data Models Evolve

Data models in business are never carved in stone because data sources and business priorities change continually. Therefore, you must plan on updating or changing them over time. For this, store your data models in a repository that makes them easy to access for expansion and modification, and use a data dictionary or “ready reference” with clear, up-to-date information about the purpose and format of each type of data.

Better Data Modeling Leads to Greater Business Benefit

Business performance in terms of profitability, productivity, efficiency, customer satisfaction, and more can benefit from data modeling that helps users quickly and easily get answers to their business questions. Key success factors for this include linking to organizational needs and objectives, using tools to speed up the steps in readying data for answers to all queries, and making priorities of simplicity and common sense. Once these conditions are met, you and your business, whether small, medium, or big, can expect your data modeling to bring you significant business value.

Blog – Sisense

How Will Travel Look In A Digital World?

Part 1 in the “Advanced Integration” series

This blog is the first of a series that will drill down into technologies for advanced integration and enhanced intelligence, and their potential applications. We will highlight these capabilities, along with details on architectural design and evolutions in available underlying products and technology components. Here, we are exploring how an integrated platform could support a personal travel assistant.

The digital transformation journey for travel

The significant ongoing growth in global travel has triggered the need for more advanced travel-planning and execution tools to supplement existing solutions, which individually cover only certain aspects of the overall itinerary.

Today’s platform technology makes it possible to build a highly automated travel-planning and execution-monitoring solution. The technology can combine features from existing applications with intelligent microservices and data sources such as satellite services, the Internet of Things (IoT), machine learning, and blockchain.

The result is an intelligent personal travel assistant, offering the traveler all data required in a fully automated way, in real time – from planning to execution, including routing, bookings, scheduling, checkouts, and final cost settlements. Imagine how a personal travel assistant could simplify the life of the frequent traveler, whether for business or leisure. The additional benefits are obvious in terms of administrative cost savings, and also for better alignment with various carriers and optimization of occupancies. For energy-intensive transportation providers, the result could also translate into more environmentally sustainable practices through resource optimization.

Travel-planning solutions today

Today, organizing a trip requires a lot of human judgment and manual steps during scheduling and execution. Multiple data sources need to be consulted. Today’s travel management solutions cover only the needs of the traveler in specific areas, like route planning, booking, or expense management; they offer little integration across the various areas. For example, a route planner provides travel schedule alternatives, but limited functionality in reservation and ticketing. Likewise, booking systems have limited functionality in automatic rebooking or subsequent payment adjustments, requiring lots of manual intervention.

Access to all relevant travel-data sources in real time is a prerequisite to produce qualified and updated travel schedules throughout the travel journey. These include carrier schedules (flights, train); lodging, dining, and entertainment (hotels, restaurants, performance venues); travelers’ profiles (route preferences, loyalty programs); and access to supporting services (hotlines, insurance). This is true of existing travel management solutions.

Elevating automation in travel planning

However, using historical traveler patterns, machine learning can now identify one or more routes, as well as lodging and entertainment suggestions, and propose these through the personal travel assistant with the corresponding time and cost implications. The traveler can then select the most suitable option and, in doing so, continually update the machine learning engine. The engine can also release bookings and reservations in due time and issue ticketing, considering time-window restrictions, penalty clauses, soft/hard booking judgments, etc. Contracts with various providers are used as a source.

Satellite services and IoT allow the location of the traveler to be monitored throughout the journey, as well as the location of each carrier and deviations from the original schedule. The machine learning engine can anticipate potential conflicts and reschedule the trip to a best alternative route going forward, making all necessary adjustments in bookings and reservations.

Ticketing and other required verification documents can be pushed to the personal travel assistant upon confirmation of the various carriers. Payment settlement follows through various channels (such as bank transfer, credit card, blockchain) upon confirmation of carrier usage, either detected through satellite and IoT data sources or confirmed manually. Final settlement of all costs and expenses can be fully automated, sharing the relevant data with standard travel and expense applications.

The foundation: a consolidated data platform

The foundation of the new solution is a data platform consolidating all relevant data sources, as well as offering required security capabilities and mobile access. For example, the future state could include:

  • Booking of travel and lodging followed automatically by route scheduling
  • Issuance of tickets and other documents upon confirmation with contractual best alternatives
  • Automatic settlement of payments and expense declarations
  • Planning and booking of transfers to and from the airport

Making this a reality, however, will require meaningful changes to existing compliance and authorization barriers: bookings must be changeable in an automated way, and payment settlements and adjustments must be possible with only semi-authorized approval.

The payoff

If those barriers to entry can be overcome through policy change, the improvements in the travel experience are considerable, including:

  • Time and cost saving during planning, rescheduling, and execution of trips, eliminating all paperwork and phone calls
  • Minimized hiccups and waiting times during travel, since all data related to availabilities, schedules, and calamities will be up-to-date and available in real time
  • Up-to-date information provided to the traveler, allowing for a smooth trip with minimal disruption or unexpected delays
  • Better visibility for all stakeholders, including travel agencies, carriers, hotels, and employers, on travel costs and faster settlement without errors

Connect with Frank on LinkedIn.

Connect with John on LinkedIn.

Digitalist Magazine

Goodwill of Silicon Valley Readies for Coming Tsunami

Posted by Barney Beal, Content Director

One might not expect Goodwill to be in the car detailing and mattress reconstruction business.

That’s because Goodwill has what may seem like a straightforward business model: accept donated items, sell them, and use the funds to pay for job placement, training, and other community-based programs. In fact, there is a great deal of complexity in how Goodwill centers operate, and for Goodwill of Silicon Valley, operating amid some of the highest housing costs in the country, a number of factors have combined to make managing that complexity vital to survival.

“One of the concerns I have with Goodwill is that commodity prices have gone down while the minimum wage has gone up,” creating lower prices for goods and higher costs, said Christopher Baker, CFO of Goodwill of Silicon Valley. “There is a tsunami coming for Goodwill. The only way to get through this is optimizing our business.”

Taming Complexity to Stave Off Catastrophe

That’s no easy task because, like many other Goodwill branches, Goodwill of Silicon Valley is a complex operation. It runs 18 different stores plus a boutique store, an ecommerce site, 20 donation sites, aftermarket salvage, car detailing and, until recently, mattress reconstruction.

Moreover, dealing with donations adds complexity most retailers don’t need to deal with.

“Donated goods area is a different beast,” Baker said. “We have a good-better-best pricing system. In the donated goods business, we deal with 12 million items per year. Each is unique.”

Goodwill of Silicon Valley also tracks who sorted each item and how long items stay on the shelves. Each item follows a four-week rotation during which it is progressively discounted; after that it is pulled from the shelves and sold in bulk, which is treated differently and priced per pound. Properly tracking what’s sold to salvage can make a huge difference to the bottom line.
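
The rotation rule can be sketched in a few lines. The markdown percentages below are hypothetical stand-ins; the article gives only the four-week window and the move to per-pound bulk pricing afterward:

```python
# Illustrative four-week markdown rotation. The discount schedule is
# assumed for the example, not Goodwill's actual pricing.
WEEKLY_DISCOUNT = {1: 0.00, 2: 0.25, 3: 0.50, 4: 0.75}

def shelf_price(base_price, weeks_on_shelf):
    """Current ticket price, or None once the item rotates off retail
    shelves and is sold in bulk by the pound."""
    if weeks_on_shelf > 4:
        return None  # pulled from the shelf; now salvage, priced per pound
    return round(base_price * (1 - WEEKLY_DISCOUNT[weeks_on_shelf]), 2)
```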

“It’s a complex organization in how we generate invoices and manage the separate businesses,” Baker said. “We were doing that with a lot of software. I was using an outside consultant [to help manage it]. He’s a bright guy but I was beginning to wonder who the customer was.”

Leaning on a Technology Background

Baker, who previously worked in technology, and the rest of management at Goodwill of Silicon Valley recognized that for the organization to continue to survive it needed to better manage all of its information and the existing processes. It was running Microsoft Dynamics GP for ERP and RMS for point of sale (POS).

“We had a lot of software, but I still needed a lot more from a CRM and planning perspective,” Baker said. “The cost of doing this and integrating all these platforms together didn’t make sense.”

It deployed NetSuite to manage ERP, including multiple subsidiaries, inventory management, CRM, omnichannel POS, and procure-to-pay, and is using the NetSuite Bronto commerce marketing tool. NetSuite was able to handle all of Goodwill’s complexity. The organization adopted some of the best practices NetSuite has refined over its many years, while customizing the system to its needs elsewhere, Baker said. Notably, Goodwill of Silicon Valley created a customized bill of lading packing ticket that employees can simply hand over to drivers making a pickup, creating significant time savings on manual entries. In fact, simplifying processes for the workforce was another critical point.

“We have over 100 percent turnover,” Baker said, noting that many employees take positions at Goodwill to help get back on their feet. “We want to place people in meaningful employment going forward. It was critical to have simple tasks and limit forms.”

Rapid ROI

Already NetSuite is paying dividends. New-product gross margins went from 39 percent to 47 percent, adding roughly $100,000 to the bottom line, while gross margins on salvage grew from 50 percent to 54 percent. The visibility into inventory and finances has also allowed Baker to take a more hands-on approach.

“Each week, I can go and see if there’s a large variance in the store even into the item that’s missing,” he said. “I can send an email to store managers each week. They know we’re watching them now.”

The result is an organization well positioned to weather the coming tsunami.

Watch this short video to see the impact Goodwill of Silicon Valley is making in its community.

Learn more about NetSuite for nonprofits, and stop by booth 318 at the Goodwill Summer Conference to meet the NetSuite team in person.

Posted on Thu, July 19, 2018, by NetSuite

The NetSuite Blog

Voigt notation in Mathematica

In computational mechanics software (Abaqus, Ansys, Comsol, etc.), Voigt notation is commonly used to represent a symmetric tensor by reducing its order.

My question: how can we efficiently obtain the Voigt notation of a second-order or fourth-order tensor in Mathematica?

e.g. second order tensor: Array[Subscript[a, ## & @@ Sort[{##}]] &, {6, 6}] // MatrixForm

PS: doing it manually, by hand, is not an acceptable approach.
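
The question asks for Mathematica, but the index bookkeeping is language-agnostic; as a sketch, here is the standard mapping (11, 22, 33, 23, 13, 12) in NumPy, which translates directly to a Mathematica `Table`/`Part` construction:

```python
import numpy as np

# Standard Voigt mapping (zero-based): (i, j) tensor pair -> Voigt index.
VOIGT = {(0, 0): 0, (1, 1): 1, (2, 2): 2, (1, 2): 3, (0, 2): 4, (0, 1): 5}

def to_voigt_2(t):
    """Symmetric 3x3 second-order tensor -> length-6 Voigt vector."""
    v = np.empty(6)
    for (i, j), p in VOIGT.items():
        v[p] = t[i, j]
    return v

def to_voigt_4(c):
    """3x3x3x3 tensor with minor symmetries -> 6x6 Voigt matrix."""
    m = np.empty((6, 6))
    for (i, j), p in VOIGT.items():
        for (k, l), q in VOIGT.items():
            m[p, q] = c[i, j, k, l]
    return m
```

Note that this is the plain-component convention; for strain-like tensors, engineering (Voigt) strain conventions put a factor of 2 on the shear components.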

Recent Questions – Mathematica Stack Exchange

FICO Score Planner: Helping People Meet Their FICO Score Goals

Functioning in the U.S. economy using cash only can be a challenge. Simple daily transactions, such as making an online purchase, going out to eat, or even paying a bridge toll, are a lot easier with a credit or debit card.

Most people know that a higher credit score is a positive thing, as it can help increase access to credit at more attractive rates. Most people also understand that not paying bills on time, carrying a large amount of credit card debt, and opening many new credit accounts in a short period are behaviors likely to have a negative impact on their credit scores.

What’s less intuitive is knowing what actions they could take to reach a target credit score by a target date. For example, if someone currently has a 695 FICO® Score 8 based on Experian data and wants to increase that score to 725 in six months so they can apply for a new credit card, how would they know whether that target is even achievable, or what actions might help achieve it?

FICO® Score Planner is a new feature built by FICO scientists that enables an individual to set a target FICO® Score 8 goal and a desired time frame for reaching it. These inputs, along with the individual’s current FICO® Score 8 and credit report, are analyzed by the FICO® Score Planner algorithm, which produces a set of potential actions consumers could take to help reach the target. Consumers can then track progress toward their goal, or modify it along the way.
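
Purely as a hypothetical illustration of the planner’s inputs and a sanity check on a goal (the actual FICO® Score Planner algorithm is proprietary and not described in this post, and the per-month ceiling below is an assumed figure, not a FICO one):

```python
from dataclasses import dataclass

@dataclass
class ScoreGoal:
    current_score: int  # e.g. a FICO Score 8 of 695
    target_score: int   # e.g. 725
    months: int         # desired time to reach the goal

def plan_is_plausible(goal, max_gain_per_month=10):
    """Crude feasibility check: is the requested gain within an
    assumed per-month ceiling? Illustrative only."""
    gain = goal.target_score - goal.current_score
    return gain <= max_gain_per_month * goal.months
```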

Ideal for people with an active credit profile, FICO® Score Planner helps take away some of the guesswork many people face when trying to figure out which actions may help them achieve their FICO Score goals.

7/19/18 Webinar: Next Generation Location and Data Analysis using Mapbox and Power BI

Join Charles Sterling and Sam Gehret as they walk through how Mapbox and Power BI can use location data to tell your story using next generation maps.


When: 7/19/18 10AM PST

If you are not familiar with Mapbox, it is the location data platform for mobile and web applications. They provide building blocks to add location features like maps, search, and navigation into any experience you create.

Sam Gehret

Sam Gehret is a Solutions Engineer for BI and data visualization at Mapbox. He currently manages the development and roadmap for the Mapbox custom visual for Power BI. Sam has over seven years of business intelligence experience working in both product and sales at another large BI company. He holds a BA from Dartmouth College and is a graduate of the General Assembly JavaScript bootcamp.

Microsoft Power BI Blog | Microsoft Power BI