Tag Archives: Security

Top 5 Fraud & Security Posts: AI Meets AML (and Hackers)

One of the newer applications of artificial intelligence rose to the top of the Fraud & Security blog last year: anti-money laundering. Readers were also keenly interested in learning more about cybersecurity and ATM compromise trends.

Here were the top 5 posts of 2017 in the Fraud & Security category:


As FICO began using AI to detect money laundering patterns, three of our business leaders blogged about why and how AI was being applied. Two of the top posts of the year related to this topic, with TJ Horan explaining why advanced analytics are needed. He noted three ways AI systems improve on traditional AML solutions:

  • More effective than rules-based systems: “As regulations become ever more demanding, the rules-based systems grow more and more complex with hundreds of rules driving know your customer (KYC) activity and Suspicious Activity Report (SAR) filing. As more rules get added, more and more cases get flagged for investigation while false positive rates keep increasing. Sophisticated criminals learn how to work around the transaction monitoring rules, avoiding known suspicious patterns of behavior.”
  • Powerful customer segmentation: “Traditional AML solutions resort to hard segmentation of customers based on the KYC data or sequence of behavior patterns. FICO’s approach recognizes that customers are too complex to be assigned to hard-and-fast segments, and need to be monitored continuously for anomalous behavior.”
  • Rank-ordering of AML alarms: “Using machine learning technology, FICO has also created an AML Threat Score that prioritizes investigation queues for SARs, leveraging behavioral analytics capability from Falcon Fraud Manager. This is a significant improvement, since finding true money-laundering behavior among tens of thousands of SARs is a true needle-in-the-haystack analogy.”

FICO Chief Analytics Officer Dr. Scott Zoldi followed up by explaining two techniques FICO has applied to AML, soft clustering and behavior-sorted lists. Of the first, he wrote:

“Using a generative model based on an unsupervised Bayesian learning technique, we take customers’ banking transactions in aggregate and generate “archetypes” of customer behavior. Each customer is a mixture of these archetypes and in real time these archetypes are adjusted with financial and non-financial activity of customers.  We find that using clustering techniques based on the customer’s archetypes allows customer clusters to be formed within their KYC hard segmentation.”
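As a toy illustration of the mixture idea only (not FICO's actual model, which uses unsupervised Bayesian learning), a customer's transaction profile can be expressed as soft weights over a few hypothetical archetypes rather than a single hard segment:

```python
import math

# Hypothetical archetypes: activity profile over (retail, wires, cash, crypto)
ARCHETYPES = {
    "everyday_spender": [0.8, 0.05, 0.15, 0.0],
    "business_wirer":   [0.2, 0.7,  0.1,  0.0],
    "cash_heavy":       [0.1, 0.1,  0.8,  0.0],
}

def cosine(a, b):
    """Cosine similarity between two activity vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def archetype_mixture(profile):
    """Soft membership: similarity to each archetype, normalized to sum to 1."""
    sims = {name: max(cosine(profile, v), 0.0) for name, v in ARCHETYPES.items()}
    total = sum(sims.values())
    return {name: s / total for name, s in sims.items()}

# A customer who mostly shops retail but also moves some cash:
# the result is a blend of archetypes, not a single hard segment
mix = archetype_mixture([0.6, 0.1, 0.3, 0.0])
```

Anomaly monitoring can then watch how a customer's mixture weights shift over time, rather than whether they cross a segment boundary.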


Read the series on AI and AML


The latest trend in cybersecurity is enterprise security ratings that benchmark a firm’s cybersecurity posture over time, and also against other organizations. Sarah Rutherford explained exactly what these scores measure.

“The cybersecurity posture of an organisation refers to its overall cybersecurity strength,” she wrote. “This expresses the relative security of your IT estate, particularly as it relates to the internet and its vulnerability to outside threats.

“Hardware and software, and how they are managed through policies, procedures or controls, are part of cybersecurity and can be referred to individually as such. Referring to any of these aspects individually is talking about cybersecurity, but to understand the likelihood of a breach a more holistic approach must be taken and an understanding of the cybersecurity posture developed. This includes not only the state of the IT infrastructure, but also the state of practices, processes, and human behaviours. These are harder to measure, but can be reliably inferred from observation.”

Read the full post


2017 saw a continued increase in compromised ATMs in the US. As TJ Horan reported:

  • The number of payment cards compromised at U.S. ATMs and merchants monitored by FICO rose 70 percent in 2016.
  • The number of hacked card readers at U.S. ATMs, restaurants and merchants rose 30 percent in 2016. This followed a 546 percent increase in compromised ATMs from 2014 to 2015.

TJ also provided tips for consumers using ATMs.

Read the full post


With all the focus on data breaches, Sarah Rutherford asked why hackers hack, and provided the top three reasons:

  1. For financial gain
  2. To make a political or social point
  3. For the intellectual challenge

“The ‘why’ of cybercrime is complex,” she added. “In addition to the motivations already mentioned, hackers could also be motivated by revenge or wish to spy to gain commercial or political advantage. The different motivational factors for hacking can coincide; a hacker who is looking for an intellectual challenge may be happy to also use their interest to make money or advance their political agenda.”

Read the full post

Follow this blog for our 2018 insights into fraud, financial crime and cybersecurity.


FICO

Sync members from O365 Modern group to a mail-enabled security group

I’ve seen a few scenarios where Office 365 Modern groups were relied on for security access, but when you try to use them within Power BI you’ll find they aren’t available. Power BI relies on mail-enabled security groups, which are distinct from O365 Modern groups.

So, what do you do? There are probably other approaches you may have come up with, and I’d love to hear about those in the comments. One workaround I came up with was to use PowerShell to create a mail-enabled security group through Exchange Online and then copy over the members of an existing Office 365 Modern group. You can then reference the new mail-enabled group, by email address, within Power BI, and use it within apps, organizational content packs, and more.

For the full script, head over to GitHub.

How the script works

The script first creates a new mail-enabled security group (a distribution group with -Type Security) within Exchange Online if it doesn’t already exist.

## Update the ManagedBy and PrimarySmtpAddress values
## ManagedBy = owner of the group
## These can be changed later in the Exchange Online Admin portal

New-DistributionGroup -Name $newGroupName -Type "Security" -ManagedBy "asaxton@guyinacube.com" -PrimarySmtpAddress mygroup@guyinacube.com

After the new group is created, or if the group already exists, we will then get the members from both the old group (O365 Modern Group) and the new group (Mail-enabled security group).

$oldGroupMembers = Get-AzureADGroupMember -ObjectId $oldGroup.ObjectId -All $true
$newGroupMembers = Get-AzureADGroupMember -ObjectId $newGroup.ObjectId -All $true

Then we loop through the old group members, first checking whether each member is already in the new group. If it isn’t, we add it. If it is, we just write a message indicating it already exists and move on to the next member.

## Add old members to new group
## Check to make sure the member doesn't already exist.
## Compare by UserPrincipalName, since the two cmdlet calls return distinct objects
## and -notcontains on the raw objects would never match.
Foreach ($member in $oldGroupMembers)
{
    if ($newGroupMembers.UserPrincipalName -notcontains $member.UserPrincipalName)
    {
        Add-DistributionGroupMember -Identity $newGroupName -Member $member.UserPrincipalName
        $message = "New group does not contain member - "
        $message += $member.UserPrincipalName
        Write-Output $message
    }
    else
    {
        $message = "New group contains member - "
        $message += $member.UserPrincipalName
        Write-Output $message
    }
}

This can be re-run multiple times to make sure the Mail-enabled security group stays in sync with the O365 Modern group. So, if new users get added to the O365 Modern group, you can make sure they also get added to the Mail-enabled security group.

A couple of things are missing from the script that you may want to add:

  • Removal of users from the mail-enabled security group
  • Adding/removing users from the Office 365 Modern Group
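The first item above, removing users who have left the O365 Modern group, could be handled with a loop in the opposite direction. Here is a sketch (untested, reusing the script’s $oldGroupMembers, $newGroupMembers and $newGroupName variables, and again comparing by UserPrincipalName):

```powershell
## Remove members of the mail-enabled security group that are
## no longer in the source O365 Modern group
Foreach ($member in $newGroupMembers)
{
    if ($oldGroupMembers.UserPrincipalName -notcontains $member.UserPrincipalName)
    {
        Remove-DistributionGroupMember -Identity $newGroupName -Member $member.UserPrincipalName -Confirm:$false
        Write-Output ("Removed member no longer in source group - " + $member.UserPrincipalName)
    }
}
```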


Microsoft Power BI Blog | Microsoft Power BI

Mainframe Security Best Practices

You need not be a cybersecurity expert to know that we are living in the age of data breaches. Protecting against cyber attacks requires securing all of your infrastructure — including mainframes. Toward that end, keep reading for mainframe security best practices.

When it comes to building a cybersecurity strategy, it can be easy to leave mainframes out of the picture. In many organizations, mainframes serve as backend systems buried deep inside the IT infrastructure. Unlike commodity servers that host customer-facing Web applications, or workstations that provide portals into internal networks, mainframes are a behind-the-scenes type of infrastructure.


In addition, it can be tempting to overlook mainframe security because mainframes are more complex and difficult to secure than other types of infrastructure. Most turnkey commercial security solutions were not designed with mainframes in mind, and the diversity of the mainframe ecosystem means that there is no one-size-fits-all strategy for securing mainframes.

The fact is, however, that mainframes continue to host mission-critical workloads and data in a range of industries. Securing mainframe applications and data is just as crucial as protecting the rest of your infrastructure against breaches.

5 Steps to Mainframe Security

And although mainframe security may be difficult to achieve, it is certainly not impossible. Following are best practices for achieving mainframe security.


Access Control

You may (and certainly should!) have access control policies in place for the rest of your infrastructure.

Given that most people in your organization probably never touch mainframes, however, it can be easy to assume that you don’t need access control for them. Most people likely wouldn’t even know how to access mainframe data if they wanted to.

This doesn’t mean your mainframes can be excluded from access control, however. Locking down access credentials for mainframe shells and databases is just as important as restricting access to the rest of your infrastructure.

Security Reviews

Even if you have a strong set of mainframe security practices and policies in place, you should review them periodically to make sure they are up-to-date and continuing to meet your organization’s needs.

Yes, this takes time and forethought. It also requires you to have security experts on hand who understand the unique needs of mainframe systems. But periodic security reviews are essential if you want to catch problems before the bad guys do.

After all, a proactive security review is much less stressful than a post-breach post-mortem.

Real-Time Analytics

Given the types of data and workloads that mainframes handle, detecting a breach minutes or hours after it has occurred is often not enough to prevent major damage.

The compromise of credit card information, personal customer data and the like needs to be caught in real time.

This is why real-time data analytics and fraud detection are an important component of any mainframe security strategy.


Workload Isolation

One of the great things about mainframes is that their massive compute power can easily be segmented into multiple distinct environments by using the z/OS virtualization feature.

z/OS virtualization not only helps you to organize workloads more efficiently but also helps to improve security. By isolating different workloads from one another using virtualization, you make it harder for attackers to escalate a breach. If they are able to break into one environment, they don’t have instant access to the rest of your mainframe.

So, unless two workloads need to run inside the same environment, consider isolating them using a hypervisor.

Software Updates

Software security updates on commodity servers are easy enough to handle. The operating system takes care of them for you automatically — or at least tries. (Yes, some “Patch Tuesdays” go awry, but in general, server administrators don’t have to worry too much about security patches these days.)

Keeping mainframe software up-to-date may require more manual effort. Enabling automatic updates in z/OS is one way to reduce that effort, but you should still make it a habit to check periodically for updates in z/OS Explorer, and to review the updates that get installed.

The results from Syncsort’s annual State of the Mainframe Survey not only show that mainframes are still the predominant platform for large-scale transaction processing on mission-critical applications; they also uncover a trend toward putting analytics in place for security and compliance, as well as operational intelligence.


Syncsort + Trillium Software Blog

Gaming companies outsmart DDoS attack with new software security solutions


New releases in the online gaming industry are highly anticipated events. Millions of gamers anxiously waiting to leap onto a shiny new game service is an irresistible target for hackers—with bragging rights being the prize. But for the gaming companies, suffering a DDoS attack is a disaster with immediate loss of revenue, mitigation costs and long-term consequences for their brand. Fortunately, new approaches to security based on multi-dimensional analytics and traffic modeling using big data are changing how this game is played.

The DDoS danger

Global gaming companies build excitement with big, heavily marketed release dates. This brings millions of players online at the exact same time. During these traffic surges, gaming companies also see a surge of distributed denial of service (DDoS) attacks. Being able to surgically shut down the attacks without disrupting service is critical.

Successful DDoS attacks can have immediate revenue implications, but more importantly, they hurt the company’s customer base—and even a small number of grumpy gamers can do a lot of damage to the brand online. Growing the player base is essential for a healthy game launch, especially in the highly competitive gaming industry. So losing customers due to an inaccessible service or bad PR can have serious consequences for any game — just look at Diablo 3, which took years to recover from its self-inflicted “Error 37” fiasco.

Gaming companies generally operate worldwide, serving millions of users. To avoid latency, they distribute their platforms across multiple region-based servers. DDoS attacks can hit all or some of these servers concurrently, or can focus on different layers of the service to weaken it to the point of being unusable.

A multi-vector attack might, for instance, use hijacked Internet of Things (IoT) devices reprogrammed to participate in the attack as well as hundreds of cloud servers with 10 Gbps uplinks to launch a simultaneous TCP/IP attack, as occurred in last year’s infamous Dyn attack.

The outdated defense

Hardware mitigation solutions were not designed for the cloud and IoT era and are, unfortunately, too simplistic to keep up with these types of sophisticated threats.

When gaming companies suffer these DDoS attacks, the current common defense is to backhaul all traffic suspected of being infected to a scrubbing center, where racks of purpose-built mitigation machines clean it in a single pass. Attack detection starts with a baseline measure for what constitutes “normal” and then looks for anomalies, such as sudden large spikes in traffic. The affected traffic is then re-directed and backhauled to the scrubbers.
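The baseline-and-anomaly idea is simple to sketch. The toy model below (hypothetical traffic numbers and a single dimension; real systems model many dimensions per source) flags any interval whose request count sits more than a few standard deviations above the historical mean:

```python
import statistics

def build_baseline(history):
    """Summarize 'normal': mean and stdev of historical per-interval request counts."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(observed, baseline, z_threshold=3.0):
    """Flag an interval whose count exceeds the mean by z_threshold standard deviations."""
    mean, stdev = baseline
    return (observed - mean) / stdev > z_threshold

# Hypothetical request counts per interval during normal operation
history = [980, 1010, 995, 1005, 990, 1020, 1000, 985]
baseline = build_baseline(history)

# A modest spike passes; only the huge spike is flagged
flagged = [n for n in (1030, 5000) if is_anomalous(n, baseline)]
```

A single-threshold model like this is exactly why false positives abound: any legitimate launch-day surge looks like an attack unless richer, multi-dimensional context is added.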

There is nothing elegant about this approach; it is slow and it suffers from a lot of false positives, meaning the unnecessary backhauling of large amounts of uninfected traffic. The detection hardware lacks the raw compute power required to perform the additional analytics needed to separate out the false positives. And, as the scale of DDoS attacks escalates, these inefficiencies become increasingly costly to gaming companies, since the system has to spend resources fighting phantom attacks, instead of identifying and dealing with other attack vectors.

A more efficient solution

A more elegant and faster approach uses software-based multi-dimensional analytics, which make detection more precise. These solutions combine real-time network telemetry with advanced network analytics and other data sources such as DNS and BGP to see down to the source of attack traffic in real time.

Multi-dimensional analytics provide visibility into cloud applications and services and can instantly identify where the traffic is originating, determining whether it is friend or foe. Additionally, big data approaches to traffic modeling can help compare a potential event to past attack profiles and be more precise about what degree of variability from ‘normal’ is OK.

Armed with this kind of analysis, it becomes possible to create simple, effective filters at the peering edge of the network for the zombie PCs, IoT devices and/or cloud servers that are carrying out the attack. The offending traffic doesn’t have to be sent to the scrubbers; it is simply blocked at the edge. And every vector of the attack can be identified, pinpointing the attack endpoints and allowing for surgically precise mitigation. The ability to identify the endpoints of the attack in real-time means that rapidly changing attack vectors can also be identified and counteracted as the attackers attempt to play cat and mouse with network security operations.
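Once the offending endpoints are identified, those “simple, effective filters” can be as plain as a deny list collapsed into CIDR blocks. A minimal sketch (hypothetical addresses and rule syntax; real deployments push such filters via router ACLs or BGP Flowspec):

```python
import ipaddress

def edge_filters(flagged_sources):
    """Collapse flagged attack sources into the fewest covering CIDR blocks,
    then emit deny rules for a peering-edge ACL (hypothetical syntax)."""
    nets = [ipaddress.ip_network(ip + "/32") for ip in flagged_sources]
    collapsed = ipaddress.collapse_addresses(nets)
    return ["deny ip from {} to any".format(net) for net in collapsed]

# Four adjacent zombie hosts collapse into a single /30 rule
rules = edge_filters(["203.0.113.4", "203.0.113.5", "203.0.113.6", "203.0.113.7"])
```

Because the rules are derived from live endpoint identification rather than traffic redirection, they can be regenerated as fast as the attack vectors change.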

This is a high stakes game that is escalating with the spread of inexpensive, insecure cloud services (<10 GB) and IoT devices. DDoS botnets have evolved beyond infecting PCs and now use IoT devices and Linux servers in the cloud. This new arsenal of weapons is giving hackers a completely different level of power than they’ve had before.

Fortunately, software security solutions built around deep network analytics and big data techniques are also game changers. Gaming companies that have employed them can, for now, meet these threats with confidence.

Naim Falandino is Chief Scientist at Nokia Deepfield with expertise in real-time analytics, machine learning, and information visualization.



Big Data – VentureBeat

Cybercrimes Now Force Rethinking Public Safety And Security


The Japanese culture has always shown a special reverence for its elderly. That’s why, in 1963, the government began a tradition of giving a silver dish, called a sakazuki, to each citizen who reached the age of 100 by Keiro no Hi (Respect for the Elders Day), which is celebrated on the third Monday of each September.

That first year, there were 153 recipients, according to The Japan Times. By 2016, the number had swelled to more than 65,000, and the dishes cost the already cash-strapped government more than US$2 million, Business Insider reports. Despite the country’s continued devotion to its seniors, the article continues, the government felt obliged to downgrade the finish of the dishes to silver plating to save money.

What tends to get lost in discussions about automation taking over jobs and Millennials taking over the workplace is the impact of increased longevity. In the future, people will need to be in the workforce much longer than they are today. Half of the people born in Japan today, for example, are predicted to live to 107, making their ancestors seem fragile, according to Lynda Gratton and Andrew Scott, professors at the London Business School and authors of The 100-Year Life: Living and Working in an Age of Longevity.

The End of the Three-Stage Career

Assuming that advances in healthcare continue, future generations in wealthier societies could be looking at careers lasting 65 or more years, rather than at the roughly 40 years for today’s 70-year-olds, write Gratton and Scott. The three-stage model of employment that dominates the global economy today—education, work, and retirement—will be blown out of the water.

It will be replaced by a new model in which people continually learn new skills and shed old ones. Consider that today’s most in-demand occupations and specialties did not exist 10 years ago, according to The Future of Jobs, a report from the World Economic Forum.

And the pace of change is only going to accelerate. Sixty-five percent of children entering primary school today will ultimately end up working in jobs that don’t yet exist, the report notes.

Our current educational systems are not equipped to cope with this degree of change. For example, roughly half of the subject knowledge acquired during the first year of a four-year technical degree, such as computer science, is outdated by the time students graduate, the report continues.

Skills That Transcend the Job Market

Instead of treating post-secondary education as a jumping-off point for a specific career path, we may see a switch to a shorter school career that focuses more on skills that transcend a constantly shifting job market. Today, some of these skills, such as complex problem solving and critical thinking, are taught mostly in the context of broader disciplines, such as math or the humanities.

Other competencies that will become critically important in the future are currently treated as if they come naturally or over time with maturity or experience. We receive little, if any, formal training, for example, in creativity and innovation, empathy, emotional intelligence, cross-cultural awareness, persuasion, active listening, and acceptance of change. (No wonder the self-help marketplace continues to thrive!)

These skills, which today are heaped together under the dismissive “soft” rubric, are going to harden up to become indispensable. They will become more important, thanks to artificial intelligence and machine learning, which will usher in an era of infinite information, rendering the concept of an expert in most of today’s job disciplines a quaint relic. As our ability to know more than those around us decreases, our need to be able to collaborate well (with both humans and machines) will help define our success in the future.

Individuals and organizations alike will have to learn how to become more flexible and ready to give up set-in-stone ideas about how businesses and careers are supposed to operate. Given the rapid advances in knowledge and attendant skills that the future will bring, we must be willing to say, repeatedly, that whatever we’ve learned to that point doesn’t apply anymore.

Careers will become more like life itself: a series of unpredictable, fluid experiences rather than a tightly scripted narrative. We need to think about the way forward and be more willing to accept change at the individual and organizational levels.

Rethink Employee Training

One way that organizations can help employees manage this shift is by rethinking training. Today, overworked and overwhelmed employees devote just 1% of their workweek to learning, according to a study by consultancy Bersin by Deloitte. Meanwhile, top business leaders such as Bill Gates and Nike founder Phil Knight spend about five hours a week reading, thinking, and experimenting, according to an article in Inc. magazine.

If organizations are to avoid high turnover costs in a world where the need for new skills is shifting constantly, they must give employees more time for learning and make training courses more relevant to the future needs of organizations and individuals, not just to their current needs.

The amount of learning required will vary by role. That’s why at SAP we’re creating learning personas for specific roles in the company and determining how many hours will be required for each. We’re also dividing up training hours into distinct topics:

  • Law: 10%. This is training required by law, such as training to prevent sexual harassment in the workplace.

  • Company: 20%. Company training includes internal policies and systems.

  • Business: 30%. Employees learn skills required for their current roles in their business units.

  • Future: 40%. This is internal, external, and employee-driven training to close critical skill gaps for jobs of the future.

In the future, we will always need to learn, grow, read, seek out knowledge and truth, and better ourselves with new skills. With the support of employers and educators, we will transform our hardwired fear of change into excitement for change.

We must be able to say to ourselves, “I’m excited to learn something new that I never thought I could do or that never seemed possible before.” D!



Digitalist Magazine

Equifax announces Chief Security Officer and Chief Information Officer have left


(Reuters) — Equifax said on Friday that it made changes in its top management as part of its review of a massive data breach, with two technology and security executives leaving the company “effective immediately.”

The credit-monitoring company announced the changes in a press release that gave its most detailed public response to date of the discovery of the data breach on July 29 and the actions it has since taken.

The statement came on a day when Equifax’s share price continued to slide following a week of relentless criticism over its response to the data breach.

Lawmakers, regulators and consumers have complained that Equifax’s response to the breach, which exposed sensitive data like Social Security numbers of up to 143 million people, had been slow, inadequate and confusing.

Equifax on Friday said that Susan Mauldin, chief security officer, and David Webb, chief information officer, were retiring.

The company named Mark Rohrwasser as interim chief information officer and Russ Ayres as interim chief security officer, saying in its statement, “The personnel changes are effective immediately.”

Rohrwasser has led the company’s international IT operations, and Ayres was a vice president in the IT organization.

The company also confirmed that Mandiant, the threat intelligence arm of the cyber firm FireEye, has been brought on to help investigate the breach. It said Mandiant was brought in on Aug. 2 after Equifax’s security team initially observed “suspicious network traffic” on July 29.

The company has hired public relations companies DJE Holdings and McGinn and Company to manage its response to the hack, PR Week reported. Equifax and the two PR firms declined to comment on the report.

Equifax’s share price has fallen by more than a third since the company disclosed the hack on Sept. 7. Shares shed 3.8 percent on Friday to close at $92.98.

U.S. Senator Elizabeth Warren, who has built a reputation as a fierce consumer champion, kicked off a new round of attacks on Equifax on Friday by introducing a bill along with 11 other senators to allow consumers to freeze their credit for free. A credit freeze prevents thieves from applying for a loan using another person’s information.

Warren also signaled in a letter to the Consumer Financial Protection Bureau, the agency she helped create in the wake of the 2007-2009 financial crisis, that it may require extra powers to ensure closer federal oversight of credit reporting agencies.

Warren also wrote letters to Equifax and rival credit monitoring agencies TransUnion and Experian, federal regulators and the Government Accountability Office to see if new federal legislation was needed to protect consumers.

Connecticut Attorney General George Jepsen and more than 30 others in a state group investigating the breach acknowledged that Equifax has agreed to give free credit monitoring to hack victims but pressed the company to stop collecting any money to monitor or freeze credit.

“Selling a fee-based product that competes with Equifax’s own free offer of credit monitoring services to victims of Equifax’s own data breach is unfair,” Jepsen said.

Also on Friday, the chairman and ranking member of the Senate subcommittee on Social Security urged Social Security Administration to consider nullifying its contract with Equifax and consider making the company ineligible for future government contracts.

The two senators, Republican Bill Cassidy and Democrat Sherrod Brown, said they were concerned that personal information maintained by the Social Security Administration may also be at risk because the agency worked with Equifax to build its E-Authentication security platform.

Equifax has reported that for 2016, state and federal governments accounted for 5 percent of its total revenue of $3.1 billion.

400,000 Britons affected

Equifax, which disclosed the breach more than a month after it learned of it on July 29, said at the time that thieves may have stolen the personal information of 143 million Americans in one of the largest hacks ever.

The problem is not restricted to the United States.

Equifax said on Friday that data on up to 400,000 Britons was stolen in the hack because it was stored in the United States. The data included names, email addresses and telephone numbers but not street addresses or financial data, Equifax said.

Canada’s privacy commissioner said on Friday that it has launched an investigation into the data breach. Equifax is still working to determine the number of Canadians affected, the Office of the Privacy Commissioner of Canada said in a statement.


Big Data – VentureBeat

Be Careful when Configuring Security Roles for Business Process Flows in Dynamics 365

Microsoft Dynamics 365 introduces a fancy new editor for Business Process Flows. Although it looks slick and provides a visual representation of your processes, there is a fundamental problem with the new designer with regards to Security Roles. To illustrate the issue, I will be comparing the designer in CRM 2016 with the designer in Dynamics 365.

Firstly, let’s look at what happens when a user tries to enable security roles for a Business Process Flow in CRM 2016. A lightbox appears where the user can enable the flow for everyone, or select specific roles.


In comparison, Dynamics 365 does something very different. Instead of opening a lightbox, it opens the same screen that a user would see if they tried to create or modify existing security roles in the Security area of the system. In fact, it opens in a separate window and there is no OK button to apply the roles to the flow.


This issue is made worse if you have upgraded a Business Process Flow which is enabled for certain Security Roles from CRM 2016 to Dynamics 365. Let’s say you wanted to remove an existing role from the Business Process Flow. Again, there are no buttons available to link or unlink the role from the process flow, so users might think to use the Delete button instead. If you do this, it will delete the Security Role from the system entirely!


Be very careful when configuring Security Roles with Business Process Flows in Dynamics 365!


Magnetism Solutions Dynamics CRM Blog

Looking for HR – Or Any – Job Security? Try Analytics

Last August, a woman arrived at a Reno, Nevada, hospital and told the attending doctors that she had recently returned from an extended trip to India, where she had broken her right thighbone two years ago. The woman, who was in her 70s, had subsequently developed an infection in her thigh and hip for which she was hospitalized in India several times. The Reno doctors recognized that the infection was serious—and the visit to India, where antibiotic-resistant bacteria run rampant, raised red flags.

When none of the 14 antibiotics the physicians used to treat the woman worked, they sent a sample of the bacterium to the U.S. Centers for Disease Control (CDC) for testing. The CDC confirmed the doctors’ worst fears: the woman had a class of microbe called carbapenem-resistant Enterobacteriaceae (CRE). Carbapenems are a powerful class of antibiotics used as last-resort treatment for multidrug-resistant infections. The CDC further found that, in this patient’s case, the pathogen was impervious to all 26 antibiotics approved by the U.S. Food and Drug Administration (FDA).

In other words, there was no cure.

This is just the latest alarming development signaling the end of the road for antibiotics as we know them. In September, the woman died from septic shock, in which an infection takes over and shuts down the body’s systems, according to the CDC’s Morbidity and Mortality Weekly Report.

Other antibiotic options, had they been available, might have saved the Nevada woman. But the solution to the larger problem won’t be a new drug. It will have to be an entirely new approach to the diagnosis of infectious disease, to the use of antibiotics, and to the monitoring of antimicrobial resistance (AMR)—all enabled by new technology.

But that new technology is not being implemented fast enough to prevent what former CDC director Tom Frieden has nicknamed nightmare bacteria. And the nightmare is becoming scarier by the year. A 2014 British study calculated that 700,000 people die globally each year because of AMR. By 2050, according to the same 2014 estimate, antibiotic resistance could cause 10 million deaths a year at a cumulative global cost of US$100 trillion. And the rate of AMR is growing exponentially, thanks to the speed with which humans serving as hosts for these nasty bugs can move among healthcare facilities—or countries. In the United States, for example, CRE had been seen only in North Carolina in 2000; today it’s nationwide.

Abuse and overuse of antibiotics in healthcare and livestock production have enabled bacteria to both mutate and acquire resistant genes from other organisms, resulting in truly pan-drug resistant organisms. As ever-more powerful superbugs continue to proliferate, we are potentially facing the deadliest and most costly human-made catastrophe in modern times.

“Without urgent, coordinated action by many stakeholders, the world is headed for a post-antibiotic era, in which common infections and minor injuries which have been treatable for decades can once again kill,” said Dr. Keiji Fukuda, assistant director-general for health security for the World Health Organization (WHO).

Even if new antibiotics could solve the problem, there are obstacles to their development. For one thing, antibiotics have complex molecular structures, which slows the discovery process. Further, they aren’t terribly lucrative for pharmaceutical manufacturers: public health concerns call for new antimicrobials to be financially accessible to patients and used conservatively precisely because of the AMR issue, which reduces the financial incentives to create new compounds. The last entirely new class of antibiotic was introduced 30 years ago. Finally, bacteria will develop resistance to new antibiotics as well if we don’t adopt new approaches to using them.

Technology can play the lead role in heading off this disaster. Vast amounts of data from multiple sources are required for better decision making at all points in the process, from tracking or predicting antibiotic-resistant disease outbreaks to speeding the potential discovery of new antibiotic compounds. However, microbes will quickly adapt and resist new medications, too, if we don’t also employ systems that help doctors diagnose and treat infection in a more targeted and judicious way.

Indeed, digital tools can help in all four actions that the CDC recommends for combating AMR: preventing infections and their spread, tracking resistance patterns, improving antibiotic use, and developing new diagnostics and treatment.

Meanwhile, individuals who understand both the complexities of AMR and the value of technologies like machine learning, human-computer interaction (HCI), and mobile applications are working to develop and advocate for solutions that could save millions of lives.


Keeping an Eye Out for Outbreaks

Like others who are leading the fight against AMR, Dr. Steven Solomon has no illusions about the difficulty of the challenge. “It is the single most complex problem in all of medicine and public health—far outpacing the complexity and the difficulty of any other problem that we face,” says Solomon, who is a global health consultant and former director of the CDC’s Office of Antimicrobial Resistance.

Solomon wants to take the battle against AMR beyond the laboratory. In his view, surveillance—tracking and analyzing various data on AMR—is critical, particularly given how quickly and widely it spreads. But surveillance efforts are currently fraught with shortcomings. The available data is fragmented and often not comparable. Hospitals fail to collect the representative samples necessary for surveillance analytics, collecting data only on those patients who experience resistance and not on those who get better. Laboratories use a wide variety of testing methods, and reporting is not always consistent or complete.

Surveillance can serve as an early warning system. But weaknesses in these systems have caused public health officials to consistently underestimate the impact of AMR in loss of lives and financial costs. That’s why improving surveillance must be a top priority, says Solomon, who previously served as chair of the U.S. Federal Interagency Task Force on AMR and has been tracking the advance of AMR since he joined the U.S. Public Health Service in 1981.

A Collaborative Diagnosis

Ineffective surveillance has also contributed to huge growth in the use of antibiotics when they aren’t warranted. Strong patient demand and financial incentives for prescribing physicians are blamed for antibiotics abuse in China. India has become the largest consumer of antibiotics on the planet, in part because they are prescribed or sold for diarrheal diseases and upper respiratory infections for which they have limited value. And many countries allow individuals to purchase antibiotics over the counter, exacerbating misuse and overuse.

In the United States, antibiotics are improperly prescribed 50% of the time, according to CDC estimates. One study of adult patients visiting U.S. doctors to treat respiratory problems found that more than two-thirds of antibiotics were prescribed for conditions that were not infections at all or for infections caused by viruses—for which an antibiotic would do nothing. That’s 27 million courses of antibiotics wasted a year—just for respiratory problems—in the United States alone.

And even in countries where there are national guidelines for prescribing antibiotics, those guidelines aren’t always followed. A study published in medical journal Family Practice showed that Swedish doctors, both those trained in Sweden and those trained abroad, inconsistently followed rules for prescribing antibiotics.

Solomon strongly believes that, worldwide, doctors need to expand their use of technology in their offices or at the bedside to guide them through a more rational approach to antibiotic use. Doctors have traditionally been reluctant to adopt digital technologies, but Solomon thinks that the AMR crisis could change that. New digital tools could help doctors and hospitals integrate guidelines for optimal antibiotic prescribing into their everyday treatment routines.

“Human-computer interactions are critical, as the amount of information available on antibiotic resistance far exceeds the ability of humans to process it,” says Solomon. “It offers the possibility of greatly enhancing the utility of computer-assisted physician order entry (CPOE), combined with clinical decision support.” Healthcare facilities could embed relevant information and protocols at the point of care, guiding the physician through diagnosis and prescription and, as a byproduct, facilitating the collection and reporting of antibiotic use.


Cincinnati Children’s Hospital’s antibiotic stewardship division has deployed a software program that gathers information from electronic medical records, order entries, computerized laboratory and pathology reports, and more. The system measures baseline antimicrobial use, dosing, duration, costs, and use patterns. It also analyzes bacteria and trends in their susceptibilities and helps with clinical decision making and prescription choices. The goal, says Dr. David Haslam, who heads the program, is to decrease the use of “big gun” super antibiotics in favor of more targeted treatment.

While this approach is not yet widespread, there is consensus that incorporating such clinical-decision support into electronic health records will help improve quality of care, contain costs, and reduce overtreatment in healthcare overall—not just in AMR. A 2013 randomized clinical trial found that doctors who used decision-support tools were significantly less likely to order antibiotics than those in the control group and prescribed 50% fewer broad-spectrum antibiotics.

Putting mobile devices into doctors’ hands could also help them accept decision support, believes Solomon. Last summer, Scotland’s National Health Service developed an antimicrobial companion app to give practitioners nationwide mobile access to clinical guidance, as well as an audit tool to support boards in gathering data for local and national use.

“The immediacy and the consistency of the input to physicians at the time of ordering antibiotics may significantly help address the problem of overprescribing in ways that less-immediate interventions have failed to do,” Solomon says. In addition, handheld devices with so-called lab-on-a-chip technology could be used to test clinical specimens at the bedside and transmit the data across cellular or satellite networks in areas where infrastructure is more limited.

Artificial intelligence (AI) and machine learning can also become invaluable technology collaborators to help doctors more precisely diagnose and treat infection. In such a system, “the physician and the AI program are really ‘co-prescribing,’” says Solomon. “The AI can handle so much more information than the physician and make recommendations that can incorporate more input on the type of infection, the patient’s physiologic status and history, and resistance patterns of recent isolates in that ward, in that hospital, and in the community.”

Speed Is Everything

Growing bacteria in a dish has never appealed to Dr. James Davis, a computational biologist with joint appointments at Argonne National Laboratory and the University of Chicago Computation Institute. The first of a growing breed of computational biologists, Davis chose a PhD advisor in 2004 who was steeped in bioinformatics technology “because you could see that things were starting to change,” he says. He was one of the first in his microbiology department to submit a completely “dry” dissertation—that is, one that was all digital with nothing grown in a lab.

Upon graduation, Davis wanted to see if it was possible to predict whether an organism would be susceptible or resistant to a given antibiotic, leading him to explore the potential of machine learning to predict AMR.


As the availability of cheap computing power has gone up and the cost of genome sequencing has gone down, it has become possible to sequence a pathogen sample in order to detect its antimicrobial resistance mechanisms. This could allow doctors to identify the nature of an infection in minutes instead of hours or days, says Davis.

Davis is part of a team creating a giant database of bacterial genomes with AMR metadata for the Pathosystems Resource Integration Center (PATRIC), funded by the U.S. National Institute of Allergy and Infectious Diseases to collect data on priority pathogens, such as tuberculosis and gonorrhea.

Because the current inability to identify microbes quickly is one of the biggest roadblocks to making an accurate diagnosis, the team’s work is critically important. The standard method for identifying drug resistance is to take a sample from a wound, blood, or urine and expose the resident bacteria to various antibiotics. If the bacterial colony continues to divide and thrive despite the presence of a normally effective drug, it indicates resistance. The process typically takes between 16 and 20 hours, itself an inordinate amount of time in matters of life and death. For certain strains of antibiotic-resistant tuberculosis, though, such testing can take a week. While physicians are waiting for test results, they often prescribe broad-spectrum antibiotics or make a best guess about what drug will work based on their knowledge of what’s happening in their hospital, “and in the meantime, you either get better,” says Davis, “or you don’t.”

At PATRIC, researchers are using machine-learning classifiers to identify regions of the genome involved in antibiotic resistance that could form the foundation for a “laboratory free” process for predicting resistance. Being able to identify the genetic mechanisms of AMR and predict the behavior of bacterial pathogens without petri dishes could inform clinical decision making and improve reaction time. Thus far, the researchers have developed machine-learning classifiers for identifying antibiotic resistance in Acinetobacter baumannii (a big player in hospital-acquired infection), methicillin-resistant Staphylococcus aureus (a.k.a. MRSA, a worldwide problem), and Streptococcus pneumoniae (a leading cause of bacterial meningitis), with accuracies ranging from 88% to 99%.
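The general approach can be illustrated with a deliberately simplified sketch: represent each genome fragment as a profile of overlapping k-mers and assign the label of the most similar labeled example. This is a toy stand-in for PATRIC’s actual classifiers—the sequences, labels, and similarity measure below are invented purely for illustration:

```python
from collections import Counter

def kmer_counts(seq, k=3):
    """Count overlapping k-mers in a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def similarity(a, b):
    """Shared k-mer count between two profiles (a crude kernel)."""
    return sum(min(a[kmer], b[kmer]) for kmer in a if kmer in b)

def predict(sample, labeled):
    """Assign the label of the most similar training genome."""
    profile = kmer_counts(sample)
    best = max(labeled, key=lambda item: similarity(profile, kmer_counts(item[0])))
    return best[1]

# Toy training set: fragments tagged resistant/susceptible (invented data).
train = [
    ("ATGGCGTACGTTAGC", "resistant"),
    ("ATGGCGTACGTTAGG", "resistant"),
    ("TTACCGGATCCAATG", "susceptible"),
    ("TTACCGGATCCAATC", "susceptible"),
]

print(predict("ATGGCGTACGTTAGA", train))  # -> resistant
```

Real systems use far richer features and trained models, but the core idea is the same: predict resistance from sequence alone, with no petri dish in the loop.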

Houston Methodist Hospital, which uses the PATRIC database, is researching multidrug-resistant bacteria, specifically MRSA. Not only does resistance increase the cost of care, but people with MRSA are 64% more likely to die than people with a nonresistant form of the infection, according to WHO. Houston Methodist is investigating the molecular genetic causes of drug resistance in MRSA in order to identify new treatment approaches and help develop novel antimicrobial agents.


The Hunt for a New Class of Antibiotics

There are antibiotic-resistant bacteria, and then there’s Clostridium difficile—a.k.a. C. difficile—a bacterium that attacks the intestines even in young and healthy patients in hospitals after the use of antibiotics.

It is because of C. difficile that Dr. L. Clifford McDonald jumped into the AMR fight. The epidemiologist was finishing his work analyzing the spread of SARS in Toronto hospitals in 2004 when he turned his attention to C. difficile, convinced that the bacteria would become more common and more deadly. He was right, and today he’s at the forefront of treating the infection and preventing the spread of AMR as senior advisor for science and integrity in the CDC’s Division of Healthcare Quality Promotion. “[AMR] is an area that we’re funding heavily…insofar as the CDC budget can fund anything heavily,” says McDonald, whose group has awarded $14 million in contracts for innovative anti-AMR approaches.

Developing new antibiotics is a major part of the AMR battle. The majority of new antibiotics developed in recent years have been variations of existing drug classes. It’s been three decades since the last new class of antibiotics was introduced. Less than 5% of venture capital in pharmaceutical R&D is focused on antimicrobial development. A 2008 study found that less than 10% of the 167 antibiotics in development at the time had a new “mechanism of action” to deal with multidrug resistance. “The low-hanging fruit [of antibiotic development] has been picked,” noted a WHO report.

Researchers will have to dig much deeper to develop novel medicines. Machine learning could help drug developers sort through much larger data sets and go about the capital-intensive drug development process in a more prescriptive fashion, synthesizing those molecules most likely to have an impact.

McDonald believes that it will become easier to find new antibiotics if we gain a better understanding of the communities of bacteria living in each of us—as many as 1,000 different types of microbes live in our intestines, for example. Disruption to those microbial communities—our “microbiome”—can herald AMR. McDonald says that Big Data and machine learning will be needed to unlock our microbiomes, and that’s where much of the medical community’s investment is going.

He predicts that within five years, hospitals will take fecal samples or skin swabs and sequence the microorganisms in them as a kind of pulse check on antibiotic resistance. “Just doing the bioinformatics to sort out what’s there and the types of antibiotic resistance that might be in that microbiome is a Big Data challenge,” McDonald says. “The only way to make sense of it, going forward, will be advanced analytic techniques, which will no doubt include machine learning.”

Reducing Resistance on the Farm

Bringing information closer to where it’s needed could also help reduce agriculture’s contribution to the antibiotic resistance problem. Antibiotics are widely given to livestock to promote growth or prevent disease. In the United States, more kilograms of antibiotics are administered to animals than to people, according to data from the FDA.

One company has developed a rapid, on-farm diagnostics tool to provide livestock producers with more accurate disease detection to make more informed management and treatment decisions, which it says has demonstrated a 47% to 59% reduction in antibiotic usage. Such systems, combined with pressure or regulations to reduce antibiotic use in meat production, could also help turn the AMR tide.


Breaking Down Data Silos Is the First Step

Adding to the complexity of the fight against AMR is the structure and culture of the global healthcare system itself. Historically, healthcare has been a siloed industry, notorious for its scattered approach focused on transactions rather than healthy outcomes or the true value of treatment. There’s no definitive data on the impact of AMR worldwide; the best we can do is infer estimates from the information that does exist.

The biggest issue is the availability of good data to share through mobile solutions, to drive HCI clinical-decision support tools, and to feed supercomputers and machine-learning platforms. “We have a fragmented healthcare delivery system and therefore we have fragmented information. Getting these sources of data all into one place and then enabling them all to talk to each other has been problematic,” McDonald says.

Collecting, integrating, and sharing AMR-related data on a national and ultimately global scale will be necessary to better understand the issue. HCI and mobile tools can help doctors, hospitals, and public health authorities collect more information while advanced analytics, machine learning, and in-memory computing can enable them to analyze that data in close to real time. As a result, we’ll better understand patterns of resistance from the bedside to the community and up to national and international levels, says Solomon. The good news is that new technology capabilities like AI and new potential streams of data are coming online as an era of data sharing in healthcare is beginning to dawn, adds McDonald.

The ideal goal is a digitally enabled virtuous cycle of information and treatment that could save millions of dollars, lives, and perhaps even civilization if we can get there.

Read more thought-provoking articles in the latest issue of the Digitalist Magazine, Executive Quarterly.


About the Authors:

Dr. David Delaney is Chief Medical Officer for SAP.

Joseph Miles is Global Vice President, Life Sciences, for SAP.

Walt Ellenberger is Senior Director Business Development, Healthcare Transformation and Innovation, for SAP.

Saravana Chandran is Senior Director, Advanced Analytics, for SAP.

Stephanie Overby is an independent writer and editor focused on the intersection of business and technology.



Digitalist Magazine

Troubleshooting Security Permissions? Don’t Forget Cascading Permissions


If you’re a Microsoft Dynamics 365 admin, you may encounter issues where users report seeing specific records they’re certain they should not have access to. In these cases, there may not be an error message to start from, so to troubleshoot you’ll have to cover your bases from top to bottom.

Scenario: A user with the Sales Person role reports they can read, write, and delete opportunities for which they are not the owner. They are certain that opportunities should only allow user-level access for all three privileges.


Below are the common places to start when a user describes an issue such as the one reported above.

  1. Validate the user’s role. In this case, be certain that they do in fact only have user level access for the above privileges.
  2. Check the user’s team membership. Be certain that they are not part of a special team that is giving them cumulative security access in addition to their base role.
  3. Validate that they are not part of an access team that is allowing them special access to the records.
  4. Check to see if the records in question have been shared with the user by another user or team in CRM.
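Conceptually, these four checks form a union: if any one path grants access, the user sees the record. A minimal sketch of that logic (the field and team names below are hypothetical, and this is a simplification of the platform’s real security model):

```python
def effective_read_access(user, record, shares):
    """Return the first path that grants read access, mirroring the checklist."""
    # 1. The user's own security role (user-level read: owner only).
    if record["owner"] == user["name"]:
        return "own security role"
    # 2. Owner-team membership grants the team's cumulative privileges.
    if record["owner"] in user["teams"]:
        return "owner team"
    # 3. Access-team membership on this specific record.
    if user["name"] in record.get("access_team", []):
        return "access team"
    # 4. Explicit sharing with the user or one of their teams.
    for grantee in shares.get(record["id"], []):
        if grantee == user["name"] or grantee in user["teams"]:
            return "shared"
    return None  # no path grants access

user = {"name": "jdoe", "teams": ["West Sales"]}
opp = {"id": "opp-1", "owner": "asmith", "access_team": []}
print(effective_read_access(user, opp, {"opp-1": ["West Sales"]}))  # -> shared
```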

Assume the above steps all check out: the user is not part of any teams with additional security permissions, the records have not been shared with them, and the role assigned to the user appears to be configured correctly. What next?

Sometimes cascading access from the parent record is overlooked. In our case, all users across the organization correctly have access to the CRM account entity. What we’re experiencing is that same level of access extending to opportunities for all users, even though the role privileges (above) say this should not be the case.

To validate, we’ll look at the relationship between account and opportunity. Out of the box we recognize that it is a one-to-many relationship (Account 1:N Opportunity). We can further validate the relationship behavior between the account and the opportunity. Here we’ll see that the behavior type is set to “Parental.” Because all our users can see all accounts, this is cascading down to the opportunity – despite our security role privileges telling us the user should see only their own opportunities.

To resolve this issue so that the security role may take full effect, we’ll update the relationship behavior between the account and the opportunity to “Configurable Cascading.” We’ll then set the assign, share, unshare, and reparent behaviors to “Cascade None.” Once these changes are published, all newly created opportunities will abide by the sales person security role which tells us a sales user can see only their own opportunities.
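The effect of the relationship behavior can be sketched as a small truth table in code. This is a simplified model for illustration, not the actual platform logic:

```python
def can_read_opportunity(user_can_read_account, owns_opportunity, relationship_behavior):
    """Effective read access on an opportunity under its account relationship.

    With a 'Parental' behavior, access to the parent account cascades to the
    child opportunity regardless of the user's own role privileges. With
    'Configurable Cascading' set to Cascade None, only the role applies.
    """
    if relationship_behavior == "Parental" and user_can_read_account:
        return True
    return owns_opportunity  # user-level privilege: owner only

# Everyone can read accounts; this user does not own the opportunity.
print(can_read_opportunity(True, False, "Parental"))                # True - the leak
print(can_read_opportunity(True, False, "Configurable Cascading"))  # False - role enforced
```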


Note: The visibility of opportunities created prior to this update will not be adjusted retroactively and all users will still have visibility to those historic opportunities.

There you have it – don’t forget about your cascading permissions! To receive our blog posts in your inbox, subscribe here!

Happy Dynamics 365’ing!


PowerObjects- Bringing Focus to Dynamics CRM

Expert Interview (Part 3): Databricks’ Damji Discusses Security, Cloud and Notebooks

Syncsort’s Paige Roberts recently caught up with Jules Damji (@2twitme), the Spark Community Evangelist at Databricks, and they enjoyed a long conversation. In Part 3 of this four-part interview series, we’ll look more at the importance of security to Spark users, the overwhelming move of a lot of Big Data processing to the Cloud, and what the Databricks Platform brings to the table.

In case you missed it: in Part 1, we looked at the Apache Spark community, and in the second post, we covered how the Spark and Hadoop ecosystems are merging, which supports AI development.

Paige Roberts: So, we’ve talked a lot about the new single API for Spark, a single API for Datasets and DataFrames. I can build my application once; I can run it in streaming, I can run it in batch. It doesn’t even matter anymore. I can execute it on this engine now, and maybe next year, I can execute it on another engine, and I won’t have to rewrite it every time. You won’t have to rebuild if it uses the same API. That’s very similar to a Syncsort message we’ve been calling Intelligent Execution, or Design Once, Deploy Anywhere.

Someone asked at Reynold Xin’s talk, “What do you do when you go from RDD to DataFrames?” The answer was, “Well, you have to re-write.”

[Both laugh]

Damji: Yeah. We can’t quite do it that far back.
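The “design once, deploy anywhere” idea itself can be sketched in plain Python, independent of Spark: write the transformation once against a generic iterable, then feed it either a bounded batch or an unbounded stream. This is a conceptual illustration only, not Spark’s API:

```python
def enrich(records):
    """One transformation, written once against any iterable of rows."""
    for row in records:
        yield {**row, "total": row["qty"] * row["price"]}

# Batch: apply the logic to a complete dataset.
batch = [{"qty": 2, "price": 5.0}, {"qty": 1, "price": 3.0}]
print([r["total"] for r in enrich(batch)])  # [10.0, 3.0]

# "Streaming": the identical function consumes an unbounded generator.
def arriving():
    yield {"qty": 4, "price": 2.5}

print(next(enrich(arriving()))["total"])  # 10.0
```

Spark’s unified Dataset/DataFrame API makes the same promise at scale: the execution mode (batch or streaming) is a deployment decision, not a rewrite.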


Roberts: Still, that’s a very exciting and appealing model for a lot of folks, designing jobs once and having them execute wherever without re-designing. One of the things I see that Spark has as a distinct advantage over everybody else is just the level of the APIs. They are so much easier to use, they are so much more robust. Even more so with version 2.x. That seems to broaden your community, and make it easier for the community to add to the Spark ecosystem.

Damji: It does make a huge difference in community support and participation.

So, one thing we haven’t touched on much is the Databricks business model. How does it work?

That’s a good question. Hardly anyone has effectively cracked the code on how to monetize purely open source technology. Probably one of the few companies that a lot of newer companies model themselves on is Red Hat.


Red Hat had a model of saying, “We are going to take Linux, which is open source, and we are gonna add proprietary and coveted enterprise features on it to make it available and suitable for an enterprise. Then we are going to charge for a subscription and provide support and services with it since Linux is our core competency. We have the brilliant hackers who can write your kind of device drivers and that sort of thing.”

We know it better than anyone else.

Exactly. We know it better than anyone, so one added value is core competency. Another is enterprise-grade security, which you won’t usually get in open source out of the box or from downloading from the repo. Kafka is going the same way with Confluent, right?

So, I think that’s the trend. Whoever provides the best experience for Apache Spark on their particular platform, is going to win. Databricks provides the best Apache Spark platform, along with a Unified Analytics Platform that brings people, processes and infrastructure (or platforms) together. We provide the unified workspace with notebooks, which data engineers and data scientists can collaborate on; we provide the best IO access for all your storage. We provide enterprise-grade security for both data at rest and data in motion. And we provide a fine-grained pool of serverless clusters.

As more and more data is going into the Cloud, people are more and more worried about sensitive data, and how do you protect that? So, security comes as part of this augmented offering.


They are! A lot of our customers are banks, insurance companies, and they’re really concerned with information security.

Financial institutions are a good example, and we have customers in that vertical. Financial institutions are warming up to the fact that Cloud is the future, and a good alternative. We have the same vision. So, we provide this unified analytics platform powered by Apache Spark with other stuff around it, which is Databricks specific. It gives you this comprehensive platform, which separates compute and storage, because we don’t tell you what storage to use.

Related: Expert Interview: Livy and Spot are Apache Spark and Cyber Security Projects, Not the Names of Sean Anderson’s Dogs

Store it however you want.

Right. You can store it however you want. We’ll give you the ability to bring the data in quickly and process it fast and write it back quickly. All these different aspects of Databricks bring tremendous value to our customers: security, fast IO access, core competency of Apache Spark, and the integrated workspace of notebooks.

The data scientist and ETL engineers and business analysts can work collaboratively through the Databricks notebook platform. You bring the data in, you explore the data, you do your ETL, you write notebooks, you create pipelines. So, that’s the added features for our customers that come on top of open source. But underneath it is powered by Apache Spark.

Finally, you also get the ability to productionize your jobs using our job scheduler, and the ability to manage your entire infrastructure without worrying about it.


And as long as you keep making Apache Spark better and better, and the community keeps jumping in and loving it, then you guys have got a good future.

Yes! If you try our Community Edition, you’ll actually see those benefits. If you start using our Professional Edition, you begin to see more. Every time we create a new release, we release it for our customers as well as the community. They get that instantaneously.

That’s about as fast as it gets.

Don’t miss the final post of this four-part conversation with Jules Damji (Monday, August 14th), which features more about Spark and Databricks, and the advantages of Cloud data analysis.

Big Data is constantly evolving – are you playing by the new rules? Download our eBook The New Rules for Your Data Landscape today!


Syncsort + Trillium Software Blog