Tag Archives: Guide

Beginner’s Guide: How to Perform Simple Data Validations on Records


Today, we’ll walk users who are new to Microsoft Dynamics 365 through some simple data validation checks.

The Issue

You may come across a scenario where you want to perform a simple verification on a field before you create a record. For example, you may wish to perform a validation to check if the registered Due Date on a record is older than the record’s creation date. Often, validations are implemented with custom code. However, in situations where custom technical work may not be feasible, the solution outlined below can be implemented with ease.

The Solution

Business rules are a potential answer, since a validation can be applied by comparing the Due Date against the value in the standard ‘Created On’ field. However, this field is not populated while a record is being created, which means Dynamics 365 has no value to check against, so a business rule alone cannot cover record creation.


A synchronous workflow, on the other hand, can be triggered to perform this validation before the record is created. The workflow can be stopped with a status of Cancelled if the required criteria are not met. Although this solves the problem of validating the Due Date field before the record is created, the business process error shown by the workflow is less user friendly than the error shown by a business rule.

Also, running synchronous workflows whenever the Due Date field is updated can add significant processing load, which is especially undesirable during peak periods. It is therefore best to limit this workflow to run only before record creation, while the business rule handles validation after the record is created.


To conclude, the synchronous workflow and the business rule can be used in combination to overcome this issue: the workflow performs the validation during record creation, and the business rule takes over with a much more user-friendly error message once the record is created.
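To make the intent of the check concrete, here is the validation logic both mechanisms implement, sketched in Python (the function and fallback behavior are illustrative, not anything Dynamics 365 exposes):

```python
from datetime import date
from typing import Optional

def due_date_is_valid(due_date: date, created_on: Optional[date] = None) -> bool:
    """Return True when the Due Date is on or after the reference date.

    Before the record exists there is no Created On value (the gap that
    defeats the business rule), so fall back to today's date, which is
    effectively what the pre-creation synchronous workflow compares against.
    """
    reference = created_on if created_on is not None else date.today()
    return due_date >= reference
```

After creation, the business rule performs the same comparison against the now-populated Created On field, with a friendlier error message.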

For more Dynamics 365 tips and tricks, be sure to keep checking our blog!

Happy Dynamics 365’ing!


PowerObjects – Bringing Focus to Dynamics CRM

A Quick Guide to Connections in Microsoft Dynamics 365

What are Connections?

Connections are an easy way to connect records without having to create a custom relationship between two entities. A connection can be used between records of the same type or of different types, e.g. a contact can be connected to another contact, or a contact can be connected to an account.

Example of When To Use Connections

You have three contacts: Bob, Alice, and Eve. These three contacts are attending an event called “Catch-up”. It is commonly thought that placing a lookup field on the event form is the way to represent this data. With connections you do not need that lookup; instead, you associate each contact with a connection role. You might have one connection role named ‘Host’ and another called ‘Participant’. Alice is hosting this event, so you would establish her connection with the “Catch-up” record as ‘Host’, which can be displayed within a sub-grid on the Event form.

Why Use Connection Roles?

Connections are a great way of connecting multiple entities regardless of their relationship. A lookup or sub-grid can only be associated with one entity type, while connection roles can relate to any entity. Intersect tables can grow very large depending on the amount of data on each side of a many-to-many relationship. Connections avoid this: each connection record stores four key fields, Record1 and Record2 (the primary keys of the connected records) and Record1Role and Record2Role (the primary keys of the connection roles).
Another reason to use connections is that you can use Advanced Find to query the connections between records, e.g. you could run an Advanced Find query to see all contacts filling the ‘Attendee’ role at more than one event, or look from the opposite side and see all events where a contact fulfils the ‘Attendee’ connection role.
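As an illustration of the underlying data shape (a Python sketch with made-up records, not the actual Dynamics 365 schema), each connection row carries the two connected records plus the role each plays, which is exactly what an Advanced Find query filters on:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Connection:
    """One row of the connection table: the two connected records
    plus the role each record plays (Record1Role / Record2Role)."""
    record1: str
    record1_role: str
    record2: str
    record2_role: str

# Hypothetical data mirroring the earlier event example
connections = [
    Connection("Alice", "Host", "Catch-up", "Event"),
    Connection("Bob", "Attendee", "Catch-up", "Event"),
    Connection("Eve", "Attendee", "Catch-up", "Event"),
    Connection("Bob", "Attendee", "Workshop", "Event"),
]

# "Advanced Find" style query: contacts filling the 'Attendee'
# role at more than one event.
counts = Counter(c.record1 for c in connections if c.record1_role == "Attendee")
repeat_attendees = [name for name, n in counts.items() if n > 1]
# repeat_attendees == ["Bob"]
```

Because roles live in data rather than in the schema, adding a new role ('Speaker', say) needs no customization of the entities themselves.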

How to Use Connection Roles

First off, a number of connection roles come out of the box. You can access this list through Settings -> Business Management -> Connection Roles (or Default Solution -> Connection Roles), and in addition you can create custom connection roles.


Creating a Connection Role

When you create a connection role, you may want to restrict it to a single entity or to several. This can be done on the create connection role screen.

Next, click the ‘As this Role’ lookup. For the sake of example, I’m going to create a new role called ‘Best Friends’ under the ‘Social’ category. The category is simply the type of connection role this falls under; categories can be configured under the ‘Category’ option set in the customization area.


Establishing the Connection

Navigate to the record you wish to make a Connection from, and click the ‘Connect’ button within the command bar.

Click the ‘Name’ field in the Connection popup, scroll to the bottom and select ‘Look up more records’ and you’ll see this popup.

Here is the connection form before saving:

Adding a Connection Role Sub-Grid

Okay, so you now have your connection between two records… Success! Now, where do you view this connection? One possibility is to add a new sub-grid to the form you’d like to view the connection on. In this case, I’ll add it to the contact form.
Below is the final result of adding the sub-grid and connection. It’s as easy as that.

Reciprocal Roles

I’ve talked about how to connect one record to another in a singular fashion, but what about a reciprocal role that displays a connection on both records? The classic example is a connection between two contacts, e.g. one record being the Doctor and the other connected record being the Patient. You establish this pairing by navigating to the connection role you’d like to match with another connection role and clicking ‘Add Existing’ on the sub-grid, as shown in the image below.
You connect two records in exactly the same way; the difference is what is shown on each record when you connect them. Below is what is displayed in the Connections sub-grid for each record and its respective role.

Limitations of Connections

There are a few limitations when using connections:

  • To implement custom logic, you cannot use business rules or business process flows; JavaScript must be used instead.
  • Rollup fields do not work over connections, meaning that on the connection form you cannot roll up values from related records.

I hope this blog quickly summarizes connections for you, and I hope you use them in the future!


Magnetism Solutions Dynamics CRM Blog

How to build a location aware #PowerApp – ‘Guide book Copenhagen’ – #opendatadk and #powerquery

October 25, 2017 / Erik Svensen


In one of my latest projects we used PowerApps to create a location-aware selection of stores, and I wanted to share my experience of how to do this.

So, I found an open data set about attractions, restaurants, hotels and much more in Copenhagen.


In order to get the data into PowerApps, I created an Excel workbook and used Power Query to import the data into a table.


The query that creates the table is quite simple: it does a little renaming and removal of unwanted columns, and I only imported the rows that have a Latitude and Longitude.


let
    Source = Json.Document(Web.Contents("https://portal.opendata.dk/dataset/44ecd686-5cb5-40f2-8e3f-b5e3607a55ef/resource/23425a7f-cc94-4e7e-8c73-acae88bf1333/download/guidedenmarkcphenjson.json")),
    #"Converted to Table" = Table.FromList(Source, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
    #"Expanded Column1" = Table.ExpandRecordColumn(#"Converted to Table", "Column1", {"Id", "Created", "CreatedBy", "Modified", "ModifiedBy", "Serialized", "Online", "Language", "Name", "CanonicalUrl", "Owner", "Category", "MainCategory", "Address", "ContactInformation", "Descriptions", "Files", "SocialMediaLinks", "BookingLinks", "ExternalLinks", "MetaTags", "RelatedProducts", "Places", "MediaChannels", "Distances", "Priority", "Periods", "PeriodsLink", "PriceGroups", "PriceGroupsLink", "Routes", "Rooms", "Capacity"}, {"Id", "Created", "CreatedBy", "Modified", "ModifiedBy", "Serialized", "Online", "Language", "Name", "CanonicalUrl", "Owner", "Category", "MainCategory", "Address", "ContactInformation", "Descriptions", "Files", "SocialMediaLinks", "BookingLinks", "ExternalLinks", "MetaTags", "RelatedProducts", "Places", "MediaChannels", "Distances", "Priority", "Periods", "PeriodsLink", "PriceGroups", "PriceGroupsLink", "Routes", "Rooms", "Capacity"}),
    #"Expanded Address" = Table.ExpandRecordColumn(#"Expanded Column1", "Address", {"AddressLine1", "AddressLine2", "PostalCode", "City", "Municipality", "Region", "GeoCoordinate"}, {"AddressLine1", "AddressLine2", "PostalCode", "City", "Municipality", "Region", "GeoCoordinate"}),
    #"Expanded GeoCoordinate" = Table.ExpandRecordColumn(#"Expanded Address", "GeoCoordinate", {"Latitude", "Longitude"}, {"Latitude", "Longitude"}),
    #"Filtered Rows" = Table.SelectRows(#"Expanded GeoCoordinate", each ([Latitude] <> null and [Latitude] <> 0)),
    #"Removed Columns" = Table.RemoveColumns(#"Filtered Rows", {"Municipality", "Region", "ContactInformation", "Descriptions", "Files", "SocialMediaLinks", "BookingLinks", "ExternalLinks", "MetaTags", "RelatedProducts", "Places", "MediaChannels", "Distances", "Priority", "Periods", "PeriodsLink", "PriceGroups", "PriceGroupsLink", "Routes", "Rooms", "Capacity"}),
    #"Expanded Category" = Table.ExpandRecordColumn(#"Removed Columns", "Category", {"Name"}, {"Name.1"}),
    #"Removed Columns1" = Table.RemoveColumns(#"Expanded Category", {"Owner"}),
    #"Expanded MainCategory" = Table.ExpandRecordColumn(#"Removed Columns1", "MainCategory", {"Name"}, {"Name.2"}),
    #"Renamed Columns" = Table.RenameColumns(#"Expanded MainCategory", {{"Name.2", "MainCategory"}}),
    #"Removed Columns2" = Table.RemoveColumns(#"Renamed Columns", {"AddressLine2"}),
    #"Renamed Columns1" = Table.RenameColumns(#"Removed Columns2", {{"Name.1", "Category"}}),
    #"Removed Columns3" = Table.RemoveColumns(#"Renamed Columns1", {"Created", "CreatedBy", "Modified", "ModifiedBy", "Serialized", "Online", "Language"})
in
    #"Removed Columns3"

The Excel file is then saved to OneDrive for Business.


Let’s build the app


I use the Web studio.



And select the Blank app with a Phone layout

On the canvas I click Connect to data, create a new connection to OneDrive for Business, and pick the Excel file.



So now we have a connection to data in our app.



And I insert the following controls:



The first two labels show my location as latitude and longitude. Then I inserted a slider with a minimum of 0 and a maximum of 2000 to act as the radius in meters around my location. The label above the slider simply shows the selected radius.

Now we can insert a drop-down, set its Items property to the Name column of the data connection, and see that it works.



Now we must filter the items based on our current location. This can be done using the Filter function. The formula below uses the slider to set the radius around our location:

Filter(CopenhagenGuide,
    Value(Latitude, "en-US") >= Location.Latitude - Degrees(Slider1/1000/6371) &&
    Value(Latitude, "en-US") <= Location.Latitude + Degrees(Slider1/1000/6371) &&
    Value(Longitude, "en-US") >= Location.Longitude - Degrees(Slider1/1000/6371/Cos(Radians(Location.Latitude))) &&
    Value(Longitude, "en-US") <= Location.Longitude + Degrees(Slider1/1000/6371/Cos(Radians(Location.Latitude))))
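For readers curious about the geometry: the formula approximates a circle with a latitude/longitude bounding box. The latitude delta is the radius converted to degrees of arc on a sphere of radius 6371 km, and the longitude delta is widened by 1/cos(latitude) because meridians converge toward the poles. Here is the same math as a quick Python sketch (the function name and sample coordinates are mine, for illustration):

```python
import math

EARTH_RADIUS_KM = 6371.0  # same constant the formula uses

def bounding_box(lat, lon, radius_m):
    """Approximate a circle of radius_m metres around (lat, lon)
    with a latitude/longitude bounding box, as the Filter formula does."""
    d_lat = math.degrees(radius_m / 1000.0 / EARTH_RADIUS_KM)
    # A degree of longitude is shorter than a degree of latitude
    # by a factor of cos(lat), so widen the longitude band.
    d_lon = d_lat / math.cos(math.radians(lat))
    return lat - d_lat, lat + d_lat, lon - d_lon, lon + d_lon

# Copenhagen city centre (approx.), 500 m radius
lat_min, lat_max, lon_min, lon_max = bounding_box(55.676, 12.568, 500)
```

A bounding box keeps the corners of the square slightly beyond the true radius, but for a store-finder at these distances the approximation is more than adequate.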

And if I now limit the radius to 173 meters, you can see there are 4 places nearby.


If you want to add a map highlighting the selected attraction, you can do that as well.



You can find the information to do that here – https://powerapps.microsoft.com/en-us/blog/image-control-static-maps-api/


If you want a copy of the PowerApp file you are welcome to add a comment or ping me on twitter @donsvensen and I will send it to you.


Hope you can use this – Power ON!







Erik Svensen

Your Power BI guide to SQL PASS Summit 2017

Are you excited for the SQL PASS Summit 2017? We certainly are! The Power BI team is gearing up for another jam-packed conference at the Washington State Convention Center in Seattle next week, and we look forward to seeing you there.

For those who don’t know already, the SQL PASS Summit brings over 5,000 data professionals together annually to connect, share, and learn about the latest technologies and services. There will be many sessions and activities to help you grow your knowledge and make a measurable impact at work. It’s still not too late to register – find out more by visiting PASS Summit 2017.

For those attending this year, we’ve put together a quick guide to all the Power BI sessions and focus groups that you can look forward to:

Day One (Wednesday, November 1)

Microsoft BI – An integrated modern solution

Time: 10:45 AM – 12:00 PM
Speaker: Kamal Hathi
Room: 6E

Microsoft has an integrated BI story that spans cloud and on-premises data. Join this session for an overview of the continued evolution of BI and the approach that’s shaping modern BI today and into the future. Learn about the rapid pace of product innovation in Microsoft BI technologies and new BI capabilities for the enterprise, and how IT organizations can enable modern BI for their end users. Compelling technical demonstrations will showcase the potential of data and insights.

Deliver enterprise BI on Big Data

Time: 1:30 PM – 2:45 PM
Speakers: Bret Grinslade, Josh Caplan
Room: Tahoma 4 (TCC Level 3)

Learn how to deliver analytics at the speed of thought with Azure Analysis Services on top of a petabyte-scale SQL Data Warehouse, Azure Data Lake, or HDInsight implementation. This session will cover best practices for managing, processing, and query acceleration at scale, implementing change management for data governance, and designing for performance and security. These advanced techniques will be demonstrated through an actual implementation, including architecture, code, data flows, and tips and tricks.

Unlock the power of your data by integrating analytics into your line-of-business apps

Time: 3:15 PM – 4:30 PM
Speakers: Lukasz Pawlowski, Ali Hamud, Matt Mason
Room: 3AB (WSCC)

Business users need data in their applications. Learn how Microsoft Power BI makes it easy to integrate world-class analytics into your packaged applications, line-of-business applications, and internal or external portals. See how quickly and deeply you can integrate Microsoft Power BI into your application workflows and unlock your data by seamlessly building Data Connectors for Power BI using the Power Query SDK.

Day Two (Thursday, November 2)

Keeping your on-premises data up to date with the on-premises data gateway

Time: 1:30 PM – 2:45 PM
Speakers: Miguel Llopis, Robert Bruckner
Room: 615 (WSCC)

The session will cover the on-premises data gateway and how you can keep your data fresh by connecting to your on-premises data sources without the need to move the data. Query large data sets and benefit from your existing investments. The gateway provides the flexibility you need to meet individual needs and the needs of your organization.

Power BI Report Server: Self service BI & enterprise reporting on-premises

Time: 3:15 PM – 4:30 PM
Speaker: Chris Finlan
Room: 2AB (WSCC)

Love Power BI but need an on-premises solution today? In this session learn more about Power BI Report Server — self-service analytics and enterprise reporting, all in one on-premises solution. Design beautiful, interactive reports in Power BI Desktop, publish them to Power BI Report Server, and view and interact with them in your web browser or the Power BI app on your phone. And since Power BI Report Server includes the proven enterprise reporting capabilities of SQL Server Reporting Services, it can even run your existing Reporting Services reports too. Join us for an overview of Power BI Report Server and demos of its features in action.

Enterprise BI deployments and governance with the Power BI service

Time: 3:15 PM – 4:30 PM
Speaker: Adam Wilson
Room: 603 (WSCC)

Whether you’re planning an enterprise-wide reporting deployment or providing structure to self-service BI activities within your teams, Power BI has you covered. Learn about tools for developing, publishing, and managing your BI assets. We cover the data gateway, managing report lifecycle, publishing options, administration and governance controls, and end-user capabilities across devices and platforms.

Effective report authoring using Power BI Desktop

Time: 4:45 PM – 6:00 PM
Speakers: Miguel Llopis, Will Thompson
Room: Yakima 1 (TCC Level 1)

Power BI Desktop is a tool that allows Data Analysts, Data Scientists, Business Analysts, and BI Professionals to create interactive reports that can be published to Power BI. Join us during this session for a deep dive into the report authoring, data preparation, and data modeling in Power BI Desktop. Topics covered include R Integration, third party connectors, data mashups and modeling. Learn about various connectors in Power BI to get data from various data sources to get business insights faster.

Day Three (Friday, November 3)

Creating enterprise grade BI models with Azure Analysis Services or SQL Server Analysis Services

Time: 11:00 AM – 12:15 PM
Speakers: Christian Wade, Bret Grinslade
Room: Yakima 1 (TCC Level 1)

Microsoft Azure Analysis Services and SQL Server Analysis Services enable you to build comprehensive, enterprise-scale analytic solutions that deliver actionable insights through familiar data visualization tools such as Microsoft Power BI and Microsoft Excel. Analysis Services enables consistent data across reports and users of Power BI. This session will reveal new features for large, enterprise models in the areas of performance, scalability, model management, and monitoring. Learn how to use these new features to deliver tabular models of unprecedented scale with easy data loading and simplified user consumption.

Power to the masses: BI, Apps and Bots for the rest of us

Time: 4:45 PM – 6:00 PM
Speaker: Marc Reguera, Olivier Matrat
Room: 6C (WSCC)

In this session, the Power BI Customer Advisory Team (CAT) will present their CATBot solution for helping Microsoft Sales representatives with technical questions on Microsoft Power BI. Based on Microsoft’s QnA Maker, itself built on top of the Microsoft Bot SDK, CATBot provides a fresh conversational end-point for engaging with the team, making it easy to disseminate their internal Knowledge Base (KB) at scale. Further, a mobile PowerApps solution was developed that literally puts the KB and bot at the fingertips of this global sales force, wherever they may be; both the bot and mobile application integrate natively with Microsoft Teams, the platform of choice for community engagement and management. Last but not least, telemetry from all these interactions is collected in Common Data Services (CDS) entities and used to trigger Flows and feed Power BI dashboards, providing both insights into the business, and levers to further curate the KB.

Focus Groups

The sessions listed above promise to be packed with compelling product demos and exciting new feature announcements. At the same time, the Power BI team will also run three engaging focus groups. Join us for these and help us make the product that you love even better!

Power BI Focus Group: Enterprise BI deployments and governance with the Power BI service

Date: Wednesday, November 1
Time: 3:15 PM – 4:15 PM
Speakers: Kathryn Kitchen, Adam Wilson, Sirui Sun, Nikhil Gaekwad
Room: 205 (WSCC)

In this focus group we will discuss your goals, wishes, and pains around managing and governing large-scale Power BI deployments. Topics include: understanding and auditing usage of Power BI in your Enterprise, managing and deploying solutions (ALM, report lifecycle), controlling content distribution, and more. Please come prepared to participate in an active dialog, and provide us feedback on your experiences so we can improve Power BI for you.

Power BI Focus Group: Authoring reports using Power BI Desktop

Date: Wednesday, November 1
Time: 4:45 PM – 5:45 PM
Speakers: Kathryn Kitchen, Will Thompson, Amanda Cofsky
Room: 205 (WSCC)

This focus group is for those currently using Power BI Desktop. The Power BI team would like to hear from you about the challenges and pain points you encounter when using Power BI Desktop to author reports, and what can be done to make it better. Please come prepared to participate in an active dialog and provide feedback on your experiences so we can improve Power BI for you.

Power BI Focus Group: Power BI Report Server and SQL Server Reporting Services

Date: Thursday, November 2
Time: 4:45 PM – 5:45 PM
Speakers: Riccardo Muti, Chris Finlan
Room: 205 (WSCC)

This focus group is for those currently using Power BI Report Server and/or SQL Server Reporting Services. The Power BI team would like to hear from you about the challenges and pain points you encounter with these products and what can be done to make them better. This is a discussion that requires audience participation.

SQL Clinic and Booths

Finally, the BI team at Microsoft will have a full presence at the in-demand SQL Clinic. If you have a technical question, a troubleshooting challenge, or want to find out about best practices running your BI workloads, the experts at the Clinic will have the answers for you. In addition, if you are passing by the expo hall and you have a product query or want us to dive into your BI solution, BI experts will be available at the booths in the Expo hall at the following times:

  • Wednesday, November 1 from 9:30 AM to 3:30 PM and 6:00 PM to 8:00 PM (Reception hall)
  • Thursday, November 2 from 9:30 AM to 3:30 PM
  • Friday, November 3 from 10:00 AM to 2:00 PM

We hope this will help you plan out your schedule and make the most of your Summit experience next week. See you there!


Microsoft Power BI Blog | Microsoft Power BI

Guide Dogs Nonprofit Provides Blind with a Path to Independence

Posted by Barney Beal, Content Director

In combining the worlds of breeding, behavioral training, veterinary care, and fundraising, managing the largest guide dog operation in North America is no easy task. But for Guide Dogs for the Blind, the payoff is palpable.

To commemorate National Guide Dog Month last September, NetSuite took a look at one of its innovative nonprofit customers.

Founded in 1942 as a way to help wounded veterans returning from WWII, Guide Dogs for the Blind has since evolved into an operation that trains roughly 300 guide dog teams per year and now counts 2,200 active teams across the U.S. and Canada.

Getting to that point, however, requires a wide range of disciplines. First of all, Guide Dogs for the Blind provides all its services for free, without government funding. Karen Woon, Guide Dogs for the Blind’s vice president of marketing, noted that providing those services for free can make a huge difference in the lives of the blind: “Having a guide dog enhances mobility, independence, and social inclusion. Dogs are quite the ice breakers!”

Jason Mitschele, a graduate of the program, has been a guide dog handler for over 25 years.

“My guide dogs have provided me with confidence, speed, and perhaps more importantly a world of difference in how I see myself and relate to others,” he said. “Being blind can be isolating at times, but with a beautiful dog on your arm, there’s a social aspect to it. It’s a bit like being a celebrity.”

In fact, Mitschele credits one of his guide dogs with introducing him to his wife, Amy. “We met at a fundraiser, and when my beautiful black Lab popped his head up from under the table it was game on,” he said.

The services Guide Dogs for the Blind offers encompass: breeding of the dogs (Labrador Retrievers, Golden Retrievers, and Lab/Golden crosses, chosen for specific health and temperament characteristics); travel expenses to the training campus in San Rafael, Calif., or Boring, Ore.; the two-week training; the guide dog itself; ongoing client support; and veterinary assistance if required. Not only that, but someone coming to one of Guide Dogs for the Blind’s campuses at age 30 will still need assistance at 40, 50, and 60. That’s a lot of time in dog years, and it means a lengthy relationship between the nonprofit and the people it serves.

Ioana Gandrabur was born prematurely with retrolental fibroplasia (RLF) and has been using a guide dog for the past 10 years, helping her navigate the world of international travel as she journeys to concerts and competitions as a trained classical guitarist.

“One of the highlights in my travels with my current guide, Loyal, was returning to Germany, where I had spent so much time walking with a cane while dreaming about doing it with a guide dog,” Gandrabur said. “It really felt like a dream come true, and sharing my new lifestyle with old-time friends was so enriching.”

The training also involves more than just putting any dog and any person together, Woon noted. An international business traveler has very different needs from his or her dog than a college student or a retired person whose days are filled going to the library or shopping.

For example, as a Federal Crown Prosecutor in Toronto prosecuting drug-related crimes and other complex cases, Mitschele takes his guide dog with him on investigations and into the courtroom, often resting near the jury box.

The process of creating a guide dog team is not an easy one, either. Along with finding the right combination of temperaments and paces, Guide Dogs for the Blind also employs instructors and field representatives who travel to the homes of prospective clients to better understand their lifestyle and needs.

Relying on the Army of Awesome

Like many nonprofits, Guide Dogs for the Blind depends on volunteers, notably its “Army of Awesome”: 2,000 puppy-raising families that take in the dogs for the first eight weeks of their lives, plus over 750 campus volunteers. There are, of course, also fundraising demands, with assistance coming from corporate sponsors, alumni and other donors, star athletes like NBA All-Star Klay Thompson and Major League Baseball’s Brandon Crawford, as well as a capital campaign for GDB’s forthcoming puppy center.

But, like many nonprofits, Guide Dogs for the Blind relied on manual processes, Excel spreadsheets and an antiquated accounting system to manage the organization. That left it with little visibility into operations and an inability to correct course as the need arose.

Since implementing NetSuite in June 2016, the organization can create reports at the push of a button. In fact, with the help of NetSuite’s Pro Bono volunteers, Guide Dogs for the Blind produces its annual report largely from a NetSuite script. The organization can also conduct what-if analysis for course corrections based on funding, and accounting staff have been freed from manual work and can now offer more strategic advice.

“With NetSuite, we found that our workplace giving revenue had grown significantly over the past few years without much marketing to support this program,” said Tom Horton, VP of Philanthropy, Guide Dogs for the Blind. “We therefore put more money toward marketing this particular program and pulled some marketing funds from less productive fundraising areas. The software has allowed us to use our dollars more productively.”

Learn more about how NetSuite is helping nonprofits manage their organization.

Posted on Mon, October 2, 2017 by NetSuite


The NetSuite Blog

The Insider’s Guide To Improving Payments And Cash Flow: Evaluate And Select A Partner

The title of this post was inspired by the 1996 documentary “When We Were Kings,” about the heavyweight fight of 1974 between two boxing legends, Muhammad Ali and George Foreman. In the not-so-distant future, it will also be a fitting phrase for many in the banking and insurance industries.

Readers may ask why I am talking about banking and insurance in such doomsday terms. My bleak forecast does not stem from the notion behind the common fintech (financial technology) and insurtech (insurance technology) industry pitch that they will change their respective industries with innovation and better customer experiences, although I firmly believe that some of the startups will cause significant pain to the incumbents and will indeed change their respective industries. One day, some of the existing and as-yet-unlaunched fintech and insurtech companies will also become incumbents that other startups aim to disrupt.

The real threat to the financial industry will come from a radical approach to penetrate the financial market—an approach that I believe has not yet been addressed or even conceived by the competition. The emphasis is clearly on “yet.”

What is this new concept? It is simply this: offering financial services at or below cost. I have mooted this idea at many think tank events, and I thought I should write it down to share it more broadly. It is, and should be, a terrifying thought for many, and I strongly believe this approach will be implemented in the near future. It will bring many of the incumbents to their knees, unless they prepare for what is to come by investing in technology and adapting radical business models.

People talk about the limited impact of fintech and insurtech on the incumbent business model. I must agree that at this point many startups have little influence, if you look only at the customers they have taken away from incumbents. What the startups are already doing, however, is forcing many incumbents to lower their fees to better match what the smaller players offer to their clients.

Moreover, startups have also changed customers’ expectations of the user experience. They will also use artificial intelligence and machine learning to compete against the established financial players that have more resources—such as money, data and clients—at hand. There is no way around investing in AI and machine learning to compete successfully against tech-savvy competitors. Many startups and large companies already use machine learning algorithms to build better credit risk models, predict bad loans, detect fraud, anticipate financial market behavior, improve customer relationship management, and provide more customized services to their clients. Arguably, the biggest effect of startups is that they continuously put pressure on incumbent profit margins. Startups will continue to try to change the status quo because they smell blood in the incumbent water.

The real and biggest threat to incumbents will likely originate from tech giants, such as Amazon, Apple and Facebook, and other big non-tech companies that have large customer and employee bases. These organizations will use their customers and employees to sell banking and insurance solutions, and the big financial institutions will become at best dumb pipes. The technical approaches to doing business within the fintech and insurtech industries may provide some of the tools tech giants and other large companies need to execute this strategy.

I know some readers will say that regulators will stop any attempt by non-traditional players to provide many banking and insurance services. However, I do not think regulators can or will stop the new competitors, because these companies will either obtain the necessary licenses to operate or have a bank or insurer provide third-party financial services to them. This strategy is not unlike the way some fintech challenger banks use the licenses of an existing bank to operate.

Why should we expect this scenario of financial industry disruption to happen? We all seem to agree that the tech giants are the ones to fear because of the Big Data platforms and technology knowledge they possess. In addition, tech giants have several advantages, such as the trust factor and constant interaction with satisfied customers. Furthermore, studies have shown that millennials would prefer to bank with tech giants such as Amazon, Facebook, or Google than with the existing banking players. And last but not least, it is the tech giants and startups, not the incumbents, that keep setting the bar higher for exceptional customer experience (for instance, Apple’s simplicity or Amazon’s instant gratification) and that shape client behavior and expectations.

All that speaks to tech giants’ favorable circumstances as serious competitors that are not yet ready to come in at full speed and hit the financial industry broadly, but it does not point to the need to fear an extreme disruption as I projected. I do not believe we will see those tech giants providing whole-spectrum financial services anytime soon, but they have the potential to offer services in certain segments, such as providing payment, lending, or insurance options for their customers and employees.

What is terrifying to imagine is a situation in which tech giants or other big companies provide financial service solutions at or below production costs. No, that is not a typo; I mean providing financial services for nothing—for free.

If we take this scenario to its extreme—that is, selling banking or insurance services for nothing (yes, for zero pounds, euros, dollars, or renminbi)—then we have a situation in which financial institutions in their present forms will die or be reduced to shadows of their current selves.

That can and will happen, and here’s why: Large companies could do exactly that—sell at or below cost—to win or keep customers. The new competitors would not need to earn money and could even afford to lose money in offering financial solutions if these features entice customers and new potential clients to use the companies’ core offerings. Remember that Facebook, for instance, earns the biggest portion of their profits through advertising because they have created a great platform through which people love to interact. Financial solutions would be just another great offering (especially if they are offered for free) to entice many people to join the tech giants’ ecosystems.

Alternatively, car companies such as GM could provide their employees and customers with very cheap or no-cost (no cost to customers, at cost for the company) banking or insurance solutions. Don’t forget that banking and insurance solutions can be provided at very little cost as white-label services from third parties that already have all the necessary licenses, technology and infrastructure.

All is not lost for banks and insurers, but it will be very hard for them to compete against savvy tech giants on their technological home turf. The financial industry must think fast to find ways to compete before their business oxygen runs out.

One solution that banks and insurers should pursue aggressively is to embrace the fintech and insurtech industries for their innovative business spirit and fast, direct execution approach to new ideas. That means financial institutions should buy what they can or partner with startups to make up for all the shortcomings that legacy brings. Size and regulation will not be enough to protect incumbent financial institutions against new competitors, as we have seen in many other industries.

Another idea might be for financial institutions to place advertisements on their websites or apps to compensate for loss of profit margins. I do not think this is the only solution, but financial institutions must innovate beyond their core areas of expertise and standard industry practices. Why do you think Amazon, Uber, and Airbnb have been so successful at disrupting their industries? Because they thought and acted as if they had nothing to lose and everything to gain.

The “at or below cost” approach to financial service solutions is not a far-fetched scenario for tech giants and other companies that are trying to find new ways to attract and keep clients. The banking and insurance industries must at least get very comfortable with the idea that low-cost or free financial services are coming.

A tsunami is often unnoticed in the open sea, but once it approaches the shore, it causes the sea to rise in a massive, devastating wave. The financial industry needs to determine if the threat by tech giants and non-tech companies is a small wave or a tsunami and prepare accordingly. My recommendation to all financial institutions is this: You’d better prepare for a tsunami, even if all you see is a small wave on the horizon.

Read more in my new white paper “Machine Learning in Financial Services: Changing the Rules of the Game.”




Digitalist Magazine

UPDATED: Big Data Warehousing Must See Guide for Oracle OpenWorld 2017


*** UPDATED *** Must-See Guide now available as PDF and via Apple iBooks Store

This updated version now contains details of all the most important hands-on labs AND a day-by-day calendar. This means that our comprehensive guide now covers absolutely everything you need to know about this year’s Oracle OpenWorld conference. Now, when you arrive at the Moscone Conference Center, you will be ready to get the absolute most out of this amazing conference.

The updated, and still completely free, big data warehousing Must-See Guide for OpenWorld 2017 is now available for download from the Apple iBooks Store – click here – and in PDF format – click here.

Just so you know…this guide contains the following information:

Chapter 1

 – Introduction to the must-see guide. 

Chapter 2

 – A guide to the key highlights from last year’s conference so you can relive the experience or see what you missed. Catch the most important highlights from last year’s OpenWorld conference with our on-demand video service, which covers all the major keynote sessions. Sit back and enjoy the highlights. The second section explains why you need to attend this year’s conference and how to justify it to your company. 

Chapter 3

 – Full list of Oracle Product Management and Development presenters who will be at this year’s OpenWorld. Links to all their social media sites are included alongside each profile. Read on to find out about the key people who can help you and your teams build the FUTURE using Oracle’s Data Warehouse and Big Data technologies. 

Chapter 4

 – List of the “must-see” sessions and hands-on labs at this year’s OpenWorld by category. It includes all the sessions and hands-on labs by the Oracle Product Management and Development teams along with key customer sessions. Read on for the list of the best, most innovative sessions at Oracle OpenWorld 2017. 

Chapter 5

 – Day-by-Day “must-see” guide. It includes all the sessions and hands-on labs by the Oracle Product Management and Development teams along with key customer sessions. Read on for the list of the best, most innovative sessions at Oracle OpenWorld 2017. 

Chapter 6

 – Details of all the links you need to keep up to date on Oracle’s strategy and products for Data Warehousing and Big Data. This covers all our websites, blogs and social media pages. 

Chapter 7

 – Details of our exclusive web application for smartphones and tablets, which provides you with a complete guide to everything related to data warehousing and big data at OpenWorld 2017. 

Chapter 8

 – Information to help you find your way around the area surrounding the Moscone Conference Center; this section includes some helpful maps. 

Let me know if you have any comments. Enjoy and see you in San Francisco.


Oracle Blogs | Oracle The Data Warehouse Insider Blog

The Powerful Guide to SEO for Startups and Small Businesses


Ensuring your site is optimized for search engines involves a few additional tasks built around research and discovery. Most SEO revolves around keywords, that is, the specific words your prospects and current customers will use to find a website like yours.

Keyword research is the process SEO professionals use to discover a full list of search terms that people enter into search engines while looking for information on a particular topic. Keywords can be simple or complex, depending on your industry and customer needs. These keywords are then used to achieve better rankings in search engines. How? Through on-page optimization. Read our helpful article, A Keyword Primer: Finding and Using Keyword Effectively, to get started on keyword research.

On-page optimization is the implementation piece of the puzzle, after you’ve effectively researched keywords. It’s not enough to simply discover the terms you want your pages to rank for; next you must use those keywords throughout your pages ‒ in the right places.

On-page refers to both the content and HTML source code of a page, which can be optimized for search engines. Seed keywords throughout the page in the following locations in order to properly optimize a page:

  • Meta title
  • Meta description
  • Throughout copy
  • Image alt tags
  • Internal links & external links
  • H1-H5/heading tags
  • URLs
  • LSI keywords (synonyms that Google uses to determine a page’s topical relevancy)
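As a quick illustration of the list above, here is a minimal Python sketch that reports which on-page locations already contain a target keyword. All field names and page content below are hypothetical examples, not a real audit tool.

```python
# Minimal sketch: check which on-page locations contain a target keyword.
# Field names and page content are hypothetical examples.

def keyword_coverage(keyword, page):
    """Return the on-page locations where `keyword` appears (case-insensitive)."""
    kw = keyword.lower()
    return [field for field, text in page.items() if kw in text.lower()]

page = {
    "meta_title": "Handmade Leather Wallets | Acme Goods",
    "meta_description": "Shop durable handmade leather wallets.",
    "h1": "Handmade Leather Wallets",
    "body_copy": "Our wallets are cut and stitched by hand...",
    "image_alt": "brown leather wallet on a desk",
    "url": "https://example.com/leather-wallets",
}

covered = keyword_coverage("leather wallets", page)
print(covered)                         # fields where the keyword appears
print(sorted(set(page) - set(covered)))  # fields that may still need work
```

A check like this only flags presence; judging whether the keyword reads naturally in each location is still a human task.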

Once your pages are optimized, it’s time to sit back and monitor your results. Pages that are well optimized are more likely to rank for the keywords which were intentionally placed on the page. Tracking your progress will help as an indicator of your success and tell you whether or not more optimization is needed.


An additional way you can optimize your startup’s website is through offsite factors. This type of promotion is a way to get your website mentioned on other websites and get it in front of your audience ‒ enabling you to make more money online.

Off-page optimization is a set of techniques you can use to increase the search engine rankings of your website; it’s the act of optimizing your brand’s online and offline presence. A huge part of off-page SEO is backlinks, which are incoming hyperlinks to your site from pages on other websites.

The number of backlinks a given website has is a pretty accurate indicator of how popular, important, or authoritative it is. External backlinks help a search engine understand what a page and website are about and can improve rankings for specific keywords. While backlinks are an ever-decreasing factor in SEO, they still carry significant weight when search engines determine where a site should rank.

The goal of off-page SEO is to accumulate as many positive signals as possible for your brand, in a spam-free way. Off-page link building can be achieved in a number of ways, including the following:

  • getting mentions of your brand linked back to your website;
  • claiming local profiles for your brand and location;
  • using social media and adding your website URL to your active social profiles;
  • achieving press on your local news website or industry blogs, with a link back to your site;
  • partnership and portfolio pages mentioning your website;
  • backlinks pointing to valuable content such as case studies, research reports, white papers and free guides; and
  • so much more!

If you’re interested in reading more about link building, check out our article Part 2: Link Building and SEO Strategies to Resurrect in 2017.


Act-On Blog

Big Data 101: Dummy’s Guide to Batch vs. Streaming Data

Are you trying to understand Big Data and data analytics, but are confused by the difference between stream processing and batch data processing? If so, this article’s for you!

Batch Processing vs. Stream Processing

The distinction between batch processing and stream processing is one of the most fundamental principles within the Big Data world.

There is no official definition of these two terms, but when most people use them, they mean the following:

  • Under the batch processing model, a set of data is collected over time, then fed into an analytics system. In other words, you collect a batch of information, then send it in for processing.
  • Under the streaming model, data is fed into analytics tools piece-by-piece. The processing is usually done in real time.

Those are the basic definitions. To illustrate the concept better, let’s look at the reasons why you’d use batch processing or streaming, and examples of use cases for each one.
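The two definitions above can be sketched side by side. This is a toy illustration with made-up sensor readings, not a real analytics pipeline: the batch version waits for the full data set, while the streaming version emits an up-to-date result after every event.

```python
# Toy contrast between batch and stream processing on the same data.
import statistics

readings = [21.0, 21.4, 35.9, 21.2, 21.1]  # hypothetical temperature readings

# Batch model: collect everything first, then analyze in one pass.
def batch_average(data):
    return statistics.mean(data)

# Streaming model: process each piece as it arrives, keeping running state.
def stream_averages(data):
    total, count = 0.0, 0
    for value in data:        # in a real system this loop would block on a queue
        total += value
        count += 1
        yield total / count   # a fresh result after every event

print(batch_average(readings))
print(list(stream_averages(readings)))
```

Both end at the same final average; the difference is that the streaming version had an answer available after the very first reading.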


Batch Processing Purposes and Use Cases

Batch processing is most often used when dealing with very large amounts of data, and/or when data sources are legacy systems that are not capable of delivering data in streams.

Data generated on mainframes is a good example of data that, by default, is processed in batch form. Accessing and integrating mainframe data into modern analytics environments takes time, which makes it unfeasible to turn it into streaming data in most cases.


Batch processing works well in situations where you don’t need real-time analytics results, and when it is more important to process large volumes of information than it is to get fast analytics results (although data streams can involve “big” data, too – batch processing is not a strict requirement for working with large amounts of data).

Stream Processing Purposes and Use Cases

Stream processing is key if you want analytics results in real time. By building data streams, you can feed data into analytics tools as soon as it is generated and get near-instant analytics results using platforms like Spark Streaming.

Stream processing is useful for tasks like fraud detection. If you stream-process transaction data, you can detect anomalies that signal fraud in real time, then stop fraudulent transactions before they are completed.
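A heavily simplified sketch of that idea: flag any transaction whose amount deviates sharply from the running mean seen so far. The threshold and data are illustrative only; a production fraud model would be far more sophisticated.

```python
# Hedged sketch: flag transactions that spike relative to a running mean.
# Threshold and data are illustrative, not a production fraud model.

def detect_anomalies(transactions, factor=3.0):
    """Yield transactions whose amount exceeds `factor` times the running mean."""
    total, count = 0.0, 0
    for tx in transactions:
        if count and tx["amount"] > factor * (total / count):
            yield tx              # in practice: hold or review before completion
        total += tx["amount"]
        count += 1

stream = [
    {"id": 1, "amount": 40.0},
    {"id": 2, "amount": 55.0},
    {"id": 3, "amount": 900.0},   # spike relative to the history so far
    {"id": 4, "amount": 60.0},
]
flagged = list(detect_anomalies(stream))
print([tx["id"] for tx in flagged])
```

Because the check runs per event, the suspicious transaction is caught the moment it arrives rather than in a nightly batch report.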


Turning Batch Data into Streaming Data

As noted, the nature of your data sources plays a big role in defining whether the data is suited for batch or streaming processing.

That doesn’t mean, however, that there’s nothing you can do to turn batch data into streaming data to take advantage of real-time analytics. If you’re working with legacy data sources like mainframes, you can use a tool like DMX-h to automate the data access and integration process and turn your mainframe batch data into streaming data.

This can be very useful because, by setting up streaming, you can do things with your data that would not be possible with batch processing alone. You can obtain faster results and react to problems or opportunities before the chance to act on them has passed.
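Conceptually, turning a batch source into a stream just means exposing its records one at a time instead of as a single dump. The sketch below illustrates the idea with a Python generator over an in-memory stand-in for a batch extract; it is not DMX-h itself, just the general pattern.

```python
# Illustrative only: expose a batch file as a stream of records
# so downstream consumers can process them one at a time.
import io

# Stand-in for a batch extract (e.g., a CSV dumped from a legacy system).
batch_export = io.StringIO("id,amount\n1,40.0\n2,55.0\n3,900.0\n")

def record_stream(fileobj):
    """Yield each data row of a CSV-like file as a dict, one record at a time."""
    header = fileobj.readline().rstrip("\n").split(",")
    for line in fileobj:
        yield dict(zip(header, line.rstrip("\n").split(",")))

for record in record_stream(batch_export):
    print(record)   # each record can now feed a streaming analytics pipeline
```

Real integration tools add scheduling, change data capture, and delivery guarantees on top, but the consumer-side shape is the same: a sequence of records instead of a monolithic file.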

To learn more about how Syncsort’s data tools can help you make the most of your data – and develop an agile data management strategy – download our new eBook: The New Rules for Your Data Landscape.



Syncsort + Trillium Software Blog

Data Quality Study Guide – A Review of Use Cases & Trends

Our summer school series continues with today’s fully loaded study session. Have you been taking note of all the use cases and current trends for data quality? Maybe now is a good time for a review!

Data Quality Saves You Money

A big reason to pay attention to data quality is that it can save you money. First and foremost, it can help you maximize the return on your Big Data investments. And there are additional cost-related benefits (areas that we will discuss below) to help you save even more.


It Builds Trust

Business leaders rely on Big Data analytics to make informed decisions. But according to figures presented at the recent Gartner Data and Analytics Summit, C-level executives believe that 33% of their data is inaccurate. Ensuring data quality can help organizations trust their data.

And further, customers can trust businesses who are confident in their data. If your data is inaccurate, inconsistent or otherwise of low quality, you risk misunderstanding your customers and doing things that undermine their trust in you.

It appears there is an abundance of data but a scarcity of trust, along with a growing need for data literacy. It’s important to understand what your data MEANS to your organization. Defining data’s value wedge may be key to developing confidence in your enterprise data.


For more information, watch this educational webcast, hosted by ASG and Trillium Software, which explores the importance – and challenge – of determining what data MEANS to your organization, as well as solutions to empower both your technical (IS) and business users (DOES) to collaborate in an efficient, zero-gap-lineage user interface.

Data Quality’s Link to Data Governance

Data quality is essential for data governance because ensuring data quality is the only way to be certain that your data governance policies are consistently followed and enforced.

During her Enterprise Data World presentation, Laura Sebastian-Coleman, the Data Quality Center of Excellence Lead for Cigna, noted specifically that data quality depends on fitness for purpose, representational effectiveness and data knowledge. And, without this knowledge, which depends on the data context, our data lakes or even our data warehouses are doomed to become “data graveyards.”

At this year’s Data Governance and Information Quality Conference (DGIQ), our own Keith Kohl led a session on how data governance and data quality are intrinsically linked; as the strategic importance of data grows in an organization, the intersection of these practices grows in importance, too.

Data Quality and Your Customers

Engaging your customers is vital to driving your business. Data quality can help you improve your customer records by verifying and enriching the information you already have. And beyond contact info, you can manage customer interaction by storing additional customer preferences such as time of day they visit your site and which content topics and type they are most interested in.

The more customer information you have, the better you can understand your customers and achieve “Customer 360,” or a full view of your customer. But be aware that more data means more complexity – creating a data integration paradox.
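The kind of record verification described above can be as simple as a handful of field checks. The sketch below is a minimal, hypothetical example: the rules, field names, and sample customers are invented for illustration, and real data quality tools apply far richer validation and enrichment.

```python
# Minimal sketch of simple field-level checks on customer records.
# Rules, field names, and sample data are hypothetical examples.
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate(record):
    """Return a list of data-quality problems found in one customer record."""
    problems = []
    if not record.get("name", "").strip():
        problems.append("missing name")
    if not EMAIL_RE.match(record.get("email", "")):
        problems.append("invalid email")
    if not record.get("country"):
        problems.append("missing country")
    return problems

customers = [
    {"name": "Ada Lovelace", "email": "ada@example.com", "country": "UK"},
    {"name": "", "email": "not-an-email", "country": None},
]
for c in customers:
    print(c.get("name") or "<blank>", validate(c))
```

Even checks this basic, run before records enter your CRM, prevent a surprising share of the inconsistencies that later undermine a Customer 360 view.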


For a more detailed overview of the different sources of this data, which data points are critical in obtaining, and tips for customer 360 success, download our eBook Getting Closer to Your Customers in a Big Data World.

Its Role in Cyber Security

You may be aware of all the ways you can leverage Big Data to detect fraud, but maybe you’re wondering how data quality can fight security breaches?

Think about it. If the machine data that your intrusion-detection tools collect about your software environments is filled with incomplete or inaccurate information, then you cannot expect your security tools to effectively detect dangerous threats.

Keep in mind, too, that when it comes to fraud detection, real-time results are key. By extension, your data quality tools covering fraud analysis data will also need to work in real time.

Additional Data Quality Trends

Of course, we’re always thinking about what’s next for data quality. In March, Syncsort’s CEO Josh Rogers was interviewed on theCUBE, where he discussed his vision for its future.

One additional area of interest that’s gaining momentum is machine learning. While machine learning may seem like a “silver bullet,” because of the technologies it enables for us today, it’s important to understand that without high-quality data on which to operate, it is less magical.

Download the Gartner Magic Quadrant Report to learn how leading solutions, including Trillium, can help you achieve your long-term data quality objectives.


Syncsort + Trillium Software Blog