Dr. Dre Is Making A Movie About Legendary Soul Singer Marvin Gaye


Dr. Dre is working on a movie about the late singer Marvin Gaye. The rapper, whose film credits include 2015’s “Straight Outta Compton,” which he produced and helped inspire, is in the early stages of getting the project off the ground. The rights to use Gaye’s music have been secured, according to sources. Sony/ATV Music Publishing is home to Gaye’s songwriting credits.

There have been multiple attempts to give Gaye the biopic treatment. The soul singer behind such hits as “What’s Going On,” “Sexual Healing,” and “Let’s Get It On” became one of Motown’s most successful artists before he was fatally shot in 1984 at age 44 by his father following a family dispute in their Los Angeles home. F. Gary Gray, Cameron Crowe, James Gandolfini, Scott Rudin, and actors including Jesse L. Martin and Lenny Kravitz all have tried to bring Gaye’s story to the screen, but until now, none had been authorized by Gaye’s family.

The most recent project to get the sign-off from the Gaye estate was with Jamie Foxx, who landed the rights for a limited series in 2016, but that project has seen no movement in the years since.

As for Dre, whose real name is Andre Young, the hip-hop legend has dabbled in feature projects both in front of the camera, in films like “Training Day” and “Car Wash,” and as a producer. His most recent film credit was “Straight Outta Compton,” which chronicled the rise of his hip-hop group N.W.A and was produced by Dre and fellow N.W.A member Ice Cube. The pic was a massive hit and also scored an Oscar nomination for original screenplay.

In a curious twist of fate, Dre is represented by King, Holmes, Paterno & Soriano, the same law firm that defended Robin Thicke and Pharrell Williams against the Gaye estate in the closely monitored “Blurred Lines” trial in 2015. The verdict found in favor of the Gaye family, which contended that “Blurred Lines” infringed on Gaye’s 1977 hit “Got to Give It Up,” and ordered Thicke and Williams to pay millions in damages.

Source: Variety



The Humor Mill

Want CRM Success? Serve the Servants First!


Imagine this:

You’re the owner or manager of a small business. You’re facing mounting competitive pressures and shrinking margins. Differentiation has become harder and harder because your competitors are one click away on the Internet.

You need to create consistent, high-quality experiences to keep prospects and customers coming back for more.

What do you do?

You’re in luck! CRM was built for exactly this need, and many CRM vendors make it really easy to sign up.

So just give them your credit card and let the good times roll, right?

Not so fast!

CRM success is never guaranteed. The harsh reality is that most CRM projects are doomed to failure.

What does “failure” mean? It means money and time spent implementing software (or a “web application,” if you prefer) that falls into disuse six to eighteen months after implementation. It means using email and spreadsheets and accounting software to track sales sub-optimally. It means eliminating any chance at a decent understanding of how your business is doing. It means leaving you exposed to missed opportunities.

It means stranding you and your team back where you started.

In many ways, the biggest cost of a failed CRM rollout is the lost time—the six to eighteen months that you will never get back.

Time your competition used to build and advance. Time you could have used to accelerate your business…if only you had properly adopted CRM.

I’ve helped hundreds of small businesses successfully adopt CRM over the last (yikes!) twenty years, and I’ve learned a few tricks to CRM success the hard way. One of the most fundamental of these is a simple philosophy that we preach: managers buy, but salespeople use.

Yes, it’s a bit simplistic. Of course managers will be using CRM as well as salespeople. In fact, they had better be using CRM if you want your implementation to be successful!

But the truth is, if your focus is on reaching management goals—typically things like better visibility and tracking, greater efficiency and effectiveness, higher customer retention, and the like—you will spin your wheels and get nowhere very quickly.

As much as we all like to think everyone is in it for the good of the company, it just isn’t so. Telling people to use the CRM to meet management goals is a recipe for failure.

But all is not lost! Management goals are a byproduct of good management. Good management involves a ton of different elements, but in the context of successful CRM implementations, it means you need to make life easier for your people on the front lines.

These days, salespeople and customer service representatives are being asked to do more and more with their days. They juggle more prospects and accounts, learn new sales skills for the Internet era, provide better follow-ups and touchpoints, and much, much more.

Your focus with CRM should be to make life easier for your team.

Do that, and all of a sudden, the CRM isn’t an enemy for your team to struggle with. It’s not a tool for Big Brother to watch over their shoulders.

Nope! It’s a friend that makes their lives better. It’s there to help serve prospects and customers. It’s there to help manage follow-ups. It’s there to remind your team of the hundreds of things they need to do to be successful salespeople and customer service reps.

Bottom line: a well-implemented CRM puts more commission in your salespeople’s pockets.

Done right, CRM acts like an invisible personal assistant that helps your team deliver that consistent, high-quality experience that keeps prospects and customers coming back for more.

And if you can do that, you will meet your management goals as a natural result of good CRM usage.

By Peter Wolf
President, QuantaCRM

Follow me on Twitter @CRMWolf

About QuantaCRM

QuantaCRM is a Microsoft Gold partner out of Chicago, IL with nearly two decades of experience helping small and medium-sized businesses implement and succeed with CRM.

Our OnTrack CRM Success System enables CRM success from implementation to adoption and beyond, and our ever-expanding suite of Dynamics 365 add-ons and solutions ensures you get the most from your CRM.


CRM Software Blog | Dynamics 365

Cumulative Update #12 for SQL Server 2014 SP2

The 12th cumulative update release for SQL Server 2014 SP2 is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download Cumulative updates.
To learn more about the release or servicing model, please visit:


SQL Server Release Services

Avoiding another cryptocurrency ‘penis’ moment with WatermelonBlock and IBM Watson


It was a watershed moment in the wonderful world of cryptocurrencies, ICOs, and blockchain technology projects. Prodeum — which promised to revolutionize the fruit and vegetable industry — replaced its website, post-ICO, with a white screen that contained just one word: “penis.”


It is unclear who perpetrated the scam. While the company looked like a legitimate blockchain startup based in Lithuania, various threads suggest it was an individual in Colombia. And while they only got away with $22,000 worth of ETH (more than the $11 claimed in other articles on the subject), other scams have been more fruitful.

Confido managed to walk away with over $374k in November 2017.

Today, WatermelonBlock — an AI-powered investment and trading platform for cryptocurrency investors and traders — has announced it is integrating with IBM Watson’s AI computing platform to provide investors with real-time insights and detailed analysis to help identify scams like Prodeum and Confido.

WatermelonBlock takes keywords, hashtags, and metadata terms relating to cryptocurrencies and ICOs from a wide variety of social and traditional media APIs. IBM Watson then measures this data for sentiment. It also weighs each message author individually according to their social influence and reach.

WatermelonBlock then uses its algorithms to compute a percentage and index score for each network, known as the MelonScore.
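WatermelonBlock has not published the MelonScore algorithm, so purely as a toy sketch of the idea described above (per-message sentiment weighted by author reach and mapped onto an index), one might write something like the following; the function name and the 0-100 scale are illustrative assumptions, not the company's actual method:

```python
# Toy illustration (NOT WatermelonBlock's real algorithm): combine
# per-message sentiment with author reach into a single 0-100 index.
def melon_score(messages):
    """Each message is (sentiment, reach): sentiment in [-1, 1],
    reach a non-negative influence weight (e.g. follower count)."""
    total_reach = sum(reach for _, reach in messages)
    if total_reach == 0:
        return 50.0  # neutral index when there is no signal
    weighted = sum(sentiment * reach for sentiment, reach in messages)
    # Map the reach-weighted average sentiment from [-1, 1] onto 0-100.
    return round(50 * (1 + weighted / total_reach), 1)

if __name__ == "__main__":
    msgs = [(0.8, 1000), (-0.2, 100), (0.5, 400)]
    print(melon_score(msgs))
```

A high-reach influencer's negative post drags the index down far more than a low-reach account's, which is the weighting behavior the article describes.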

So can this technology help with carefully constructed scams? Prodeum was hard to detect because it looked like a regular ICO, so how does the MelonScore help with those situations?

“WatermelonBlock is designed with retail consumers in mind,” Elliot Rothfield, cofounder and creative director at WatermelonBlock, told me. “This scam is a product of a developing market. During an era of ferment, rapid growth and changing standards make decision making difficult. By combining sentiment analysis — the voice of the people — with weighted influencer sentiment — the voice of the knowledgeable — users can circumvent being entangled in a ‘Penisgate’ controversy.”

In addition to helping investors avoid scams, WatermelonBlock is a useful source of intelligence for the ICO market in general, the majority of which consists of legitimate projects.

By continually scanning the internet for sentiment data and analyzing both tone and author credibility, the AI-powered market predictions can help investors to spot potential winners too. Whenever sentiment changes in a particular cryptocurrency, the system notifies users in real-time, giving them the opportunity to anticipate market fluctuations and inform appropriate action.

That being said, the MelonScore is not a predictor of future market value.

“The MelonScore is unique in that it will represent the sentiment of the masses with respect to cryptocurrency,” Rothfield said. “AI is used to create a ranking system unique to WatermelonBlock, built on big data sets gathered from social media, blogs, news, microsites and other public forums.”

The use of IBM Watson for AI-powered analysis is just the beginning for WatermelonBlock.

“WatermelonBlock is not just a single application but a suite of AI analysis tools,” Rothfield said. “WatermelonAnalytics will be introduced soon as a small business sentiment analyzer. WatermelonAnalytics will allow businesses to search, analyze and compare individual phrases, hashtags or direct URLs to harness industry-specific insights. Users will be able to create their own private index, allowing them to track not only the sentiment of a brand, but the sentiment of certain phrases, products, and releases. WatermelonBlock’s AI and proprietary algorithms are versatile and will be used in many different products and industries. Stay tuned for WatermelonMusic too.”


Big Data – VentureBeat

Secure Kafka Cluster

A while ago I wrote about Oracle best practices for building a secure Hadoop cluster; you can find the details here. In that blog post I intentionally didn't cover Kafka security, because the topic deserves a dedicated article. Now it's time to do that, and this post will be devoted to Kafka security only.

Kafka Security challenges

1) Encryption in motion. By default, you communicate with a Kafka cluster over an unsecured network, and anyone who can listen to the network between your client and the Kafka cluster can read message contents.

The way to avoid this is to use an on-wire encryption technology, SSL/TLS, which encrypts the data on the wire between your client and the Kafka cluster.

Communication without SSL/TLS:

SSL/TLS communication:

After you enable SSL/TLS, the following sequence of steps takes place when writing/reading a message to/from the Kafka cluster:

2) Authentication. Now that we encrypt traffic between client and server, there is another challenge: the server doesn't know with whom it is communicating. In other words, you have to enable a mechanism that prevents unknown users from working with the cluster. The default authentication mechanism in the Hadoop world is the Kerberos protocol. Here is the workflow showing the sequence of steps for secure communication with Kafka:

Kerberos is the trusted way to authenticate users on the cluster and make sure that only known users can access it.

3) Authorization. Once a user is authenticated on your cluster (and you know you are working as Bob or Alice), you may want to apply authorization rules, such as setting permissions for certain users or groups; in other words, defining what a user can and cannot do. Sentry can help with this. Sentry's philosophy is that users belong to groups, groups have their own roles, and roles have permissions.

4) Encryption at rest. One more security aspect is encryption at rest: protecting the data stored on disk. Kafka is not intended for long-term data storage, but it can hold data for days or even weeks. We have to make sure that the data stored on disk cannot be stolen and then read without the encryption key.

Security implementation. Step 1 – SSL/TLS

There is no strict sequence of steps for a security implementation, but as a first step I recommend configuring SSL/TLS. As a baseline I used Cloudera's documentation. To keep your security setup organized, create a directory on your Linux machine where you will put all the files (start with one machine; later you will need to do the same on the other Kafka servers):

$ sudo mkdir -p /opt/kafka/security

$ sudo chown -R kafka:kafka /opt/kafka/security

A Java KeyStore (JKS) is a repository of security certificates, either authorization certificates or public key certificates, plus corresponding private keys, used for instance in SSL encryption. We need to generate a key pair (a public key and an associated private key), wrap the public key into an X.509 self-signed certificate stored as a single-element certificate chain, and store this certificate chain and the private key in a new keystore entry identified by the alias selfsigned.

# keytool -genkeypair -keystore keystore.jks -keyalg RSA -alias selfsigned -dname "CN=localhost" -storepass 'welcome2' -keypass 'welcome3'

If you want to check the contents of the keystore, you can run the following command:

# keytool -list -v -keystore keystore.jks

Alias name: selfsigned

Creation date: May 30, 2018

Entry type: PrivateKeyEntry

Certificate chain length: 1


Owner: CN=localhost

Issuer: CN=localhost

Serial number: 2065847b

Valid from: Wed May 30 12:59:54 UTC 2018 until: Tue Aug 28 12:59:54 UTC 2018

As the next step, we need to extract a copy of the certificate from the Java keystore that was just created:

# keytool -export -alias selfsigned -keystore keystore.jks -rfc -file server.cer

Enter keystore password: welcome2

Then create a trust store by making a copy of the default Java trust store. The main difference between a trustStore and a keyStore is that a trustStore (as the name suggests) stores certificates from trusted Certificate Authorities (CAs), which are used to verify the certificate presented by the server in an SSL connection, while a keyStore stores the private key and identity certificate that a program presents to the other party (server or client) to verify its identity. You can find more details here. In my case, on Big Data Cloud Service, I ran the following command:

# cp /usr/java/latest/jre/lib/security/cacerts /opt/kafka/security/truststore.jks

Check the files so far:

# ls -lrt

-rw-r--r-- 1 root root 113367 May 30 12:46 truststore.jks

-rw-r--r-- 1 root root   2070 May 30 12:59 keystore.jks

-rw-r--r-- 1 root root   1039 May 30 13:01 server.cer

Put the certificate that was just extracted from the keystore into the trust store (note: "changeit" is the default password):

# keytool -import -alias selfsigned -file server.cer -keystore truststore.jks -storepass changeit

Check the file sizes afterwards (the trust store is bigger because it includes the new certificate):

# ls -lrt

-rw-r--r-- 1 root root   2070 May 30 12:59 keystore.jks

-rw-r--r-- 1 root root   1039 May 30 13:01 server.cer

-rw-r--r-- 1 root root 114117 May 30 13:06 truststore.jks

It may seem too complicated, so I decided to depict all these steps in one diagram:

So far, all these steps have been performed on a single machine (an arbitrary broker). But you will need the keystore and truststore files on each Kafka broker, so let's copy them (note: the following syntax works on Big Data Appliance, Big Data Cloud Service, and Big Data Cloud at Customer):

# dcli -C "mkdir -p /opt/kafka/security"

# dcli -C "chown kafka:kafka /opt/kafka/security"

# dcli -C -f /opt/kafka/security/keystore.jks -d /opt/kafka/security/keystore.jks

# dcli -C -f /opt/kafka/security/truststore.jks -d /opt/kafka/security/truststore.jks

After doing all these steps, you need to make some configuration changes in Cloudera Manager for each node (go to Cloudera Manager -> Kafka -> Configuration). In addition, on each node you have to change the listeners in "Kafka Broker Advanced Configuration Snippet (Safety Valve) for kafka.properties".

Also, make sure that in Cloudera Manager security.inter.broker.protocol is set to SSL. After a node restart, when all brokers are up and running, let's test it:

# openssl s_client -debug -connect kafka1.us2.oraclecloud.com:9093 -tls1_2

Certificate chain

0 s:/CN=localhost


Server certificate




















No client certificate CA names sent

Server Temp Key: ECDH, secp521r1, 521 bits

SSL handshake has read 1267 bytes and written 441 bytes

New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384

Server public key is 2048 bit

Secure Renegotiation IS supported

Compression: NONE

Expansion: NONE


    Protocol  : TLSv1.2

    Cipher    : ECDHE-RSA-AES256-GCM-SHA384

    Session-ID: 5B0EAC6CA8FB4B6EA3D0B4A494A4660351A4BD5824A059802E399308C0B472A4


    Master-Key: 60AE24480E2923023012A464D16B13F954A390094167F54CECA1BDCC8485F1E776D01806A17FB332C51FD310730191FE

    Key-Arg   : None

    Krb5 Principal: None

    PSK identity: None

    PSK identity hint: None

    Start Time: 1527688300

    Timeout   : 7200 (sec)

    Verify return code: 18 (self signed certificate)

Well, it seems our SSL connection is up and running. Time to try putting some messages into a topic:

# kafka-console-producer --broker-list kafka1.us2.oraclecloud.com:9093 --topic foobar

18/05/30 13:56:28 WARN clients.NetworkClient: Connection to node -1 could not be established. Broker may not be available.

18/05/30 13:56:28 WARN clients.NetworkClient: Connection to node -1 could not be established. Broker may not be available.

The reason for this error is that our clients are not configured properly. We need to create and use client.properties and jaas.conf files.

# cat /opt/kafka/security/client.properties




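The contents of client.properties did not survive extraction. For this SSL-only stage, a typical file might look like the following sketch; the truststore path and password are assumptions based on the steps above:

```properties
security.protocol=SSL
ssl.truststore.location=/opt/kafka/security/truststore.jks
ssl.truststore.password=changeit
```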
-bash-4.1# cat jaas.conf

KafkaClient {

      com.sun.security.auth.module.Krb5LoginModule required


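The jaas.conf listing above is truncated. A minimal KafkaClient section relying on the Kerberos ticket cache might look like this sketch (the useTicketCache option is an assumption, since the original file contents were lost):

```
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true;
};
```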

# export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/security/jaas.conf"

Now you can try again to produce messages:

# kafka-console-producer --broker-list kafka1.us2.oraclecloud.com:9093 --topic foobar --producer.config client.properties

Hello SSL world

No errors, which is already good! Let's try to consume the message:

# kafka-console-consumer --bootstrap-server kafka1.us2.oraclecloud.com:9093 --topic foobar --from-beginning --consumer.config /opt/kafka/security/client.properties

Hello SSL world

Bingo! We have created a secure channel between the Kafka cluster and a Kafka client and written a message through it.

Security implementation. Step 2 – Kerberos

So far, we have Kafka up and running on a Kerberized cluster, yet we can write and read data without a Kerberos ticket:

$ klist

klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_1001)

This is not how it is supposed to work. We expect that if the cluster is protected by Kerberos, it is impossible to do anything without a ticket. Fortunately, it is relatively easy to configure communication with a Kerberized Kafka cluster.

First, make sure that you have enabled Kerberos authentication in Cloudera Manager (Cloudera Manager -> Kafka -> Configuration):

Second, go to Cloudera Manager again and change the value of "security.inter.broker.protocol" to SASL_SSL. Note: Simple Authentication and Security Layer (SASL) is a framework for authentication and data security in Internet protocols. It decouples authentication mechanisms from application protocols, in theory allowing any authentication mechanism supported by SASL to be used in any application protocol that uses SASL. Very roughly, for the purposes of this blog post you may treat SASL as equivalent to Kerberos. After this change, you will need to modify the listener protocol on each broker (to SASL_SSL) in the "Kafka Broker Advanced Configuration Snippet (Safety Valve) for kafka.properties" setting. Then you are ready to restart the Kafka cluster and write/read data to/from it. Before doing so, you will need to modify the Kafka client credentials:

$ cat /opt/kafka/security/client.properties





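The file contents were again lost in extraction. For the Kerberized setup, client.properties would typically combine the SSL settings with SASL; this is a sketch, with the service name, path, and password assumed from the surrounding steps:

```properties
security.protocol=SASL_SSL
sasl.kerberos.service.name=kafka
ssl.truststore.location=/opt/kafka/security/truststore.jks
ssl.truststore.password=changeit
```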
After this, you may try to read data from the Kafka cluster:

$ kafka-console-consumer --bootstrap-server kafka1.us2.oraclecloud.com:9093 --topic foobar --from-beginning --consumer.config /opt/kafka/security/client.properties

Caused by: org.apache.kafka.common.KafkaException: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner  authentication information from the user

The error may mislead you, but the real reason is the absence of a Kerberos ticket:

$ klist

klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_1001)

$ kinit oracle


$ kafka-console-consumer --bootstrap-server kafka1.us2.oraclecloud.com:9093 --topic foobar --from-beginning --consumer.config /opt/kafka/security/client.properties

Hello SSL world

Great, it works! But now we have to run kinit every time before reading/writing data. For convenience, we can use a keytab instead. To do this, go to the KDC server and generate a keytab file there:

# kadmin.local

Authenticating as principal hdfs/admin@BDACLOUDSERVICE.ORACLE.COM with password.

kadmin.local: xst -norandkey -k testuser.keytab testuser

Entry for principal testuser with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:testuser.keytab.

Entry for principal testuser with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:testuser.keytab.

Entry for principal testuser with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:testuser.keytab.

Entry for principal testuser with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:testuser.keytab.

Entry for principal testuser with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:testuser.keytab.

Entry for principal testuser with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:testuser.keytab.

kadmin.local:  quit

# ls -l

-rw——-  1 root root    436 May 31 14:06 testuser.keytab

Now that we have the keytab file, we can copy it to the client machine and use it for Kerberos authentication. Don't forget to change the owner of the keytab file to the user who will run the scripts:

$ chown opc:opc /opt/kafka/security/testuser.keytab

Also, we will need to modify jaas.conf file:

$ cat /opt/kafka/security/jaas.conf

KafkaClient {

      com.sun.security.auth.module.Krb5LoginModule required





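The keytab-based jaas.conf contents were truncated in extraction; a typical version might look like this sketch (the useKeyTab/keyTab/principal options are assumptions consistent with the keytab and realm shown in the surrounding steps):

```
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/opt/kafka/security/testuser.keytab"
  principal="testuser@BDACLOUDSERVICE.ORACLE.COM";
};
```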
It seems we are fully ready to consume messages from the topic. Although our Kerberos principal on the OS is oracle, we connect to the cluster as testuser (according to jaas.conf):

$ kafka-console-consumer --bootstrap-server kafka1.us2.oraclecloud.com:9093 --topic foobar --from-beginning --consumer.config /opt/kafka/security/client.properties

18/05/31 15:04:45 INFO authenticator.AbstractLogin: Successfully logged in.

18/05/31 15:04:45 INFO kerberos.KerberosLogin: [Principal=testuser@BDACLOUDSERVICE.ORACLE.COM]: TGT refresh thread started.

Hello SSL world

Security Implementation Step 3 – Sentry

In the previous step we configured authentication, which answers the question "who am I?". Now it is time to set up an authorization mechanism, which answers the question "what am I allowed to do?". Sentry has become a very popular engine in the Hadoop world, and we will use it for Kafka's authorization. As I noted earlier, Sentry's philosophy is that users belong to groups, groups have their own roles, and roles have permissions:

We will need to follow this model with Kafka as well, but we will start with some service configuration first (Cloudera Manager -> Kafka -> Configuration):

Also, it is very important to add the kafka user to "sentry.service.admin.group" in the Sentry config (Cloudera Manager -> Sentry -> Config):

Well, now that we know who connects to the cluster, we can restrict them from reading particular topics (in other words, perform some authorization).

Note: to perform administrative operations with Sentry, you have to work as the kafka user.

$ id

uid=1001(opc) gid=1005(opc) groups=1005(opc)

$ sudo find /var -name kafka*keytab -printf "%T+\t%p\n" | sort|tail -1|cut -f 2


$ sudo cp /var/run/cloudera-scm-agent/process/1171-kafka-KAFKA_BROKER/kafka.keytab /opt/kafka/security/kafka.keytab

$ sudo chown opc:opc /opt/kafka/security/kafka.keytab

obtain Kafka ticket:

$ kinit -kt /opt/kafka/security/kafka.keytab kafka/`hostname`

$ klist

Ticket cache: FILE:/tmp/krb5cc_1001

Default principal: kafka/kafka1.us2.oraclecloud.com@BDACLOUDSERVICE.ORACLE.COM


Valid starting     Expires            Service principal

05/31/18 15:52:28  06/01/18 15:52:28  krbtgt/BDACLOUDSERVICE.ORACLE.COM@BDACLOUDSERVICE.ORACLE.COM

    renew until 06/05/18 15:52:28

Before configuring and testing Sentry with Kafka, we need to create an unprivileged user to whom we will grant permissions (the kafka user is privileged and bypasses Sentry). There are a few simple steps. First, create a test (unprivileged) user on each Hadoop node (this syntax works on Big Data Appliance, Big Data Cloud Service, and Big Data Cloud at Customer):

# dcli -C "useradd testsentry -u 1011"

Remember that Sentry relies heavily on groups, so we have to create one and put the "testsentry" user in it:

# dcli -C "groupadd testsentry_grp -g 1017"

After the group has been created, put the user in it:

# dcli -C "usermod -g testsentry_grp testsentry"

Check that everything is as we expect:

# dcli -C "id testsentry"

uid=1011(testsentry) gid=1017(testsentry_grp) groups=1017(testsentry_grp)
uid=1011(testsentry) gid=1017(testsentry_grp) groups=1017(testsentry_grp)
uid=1011(testsentry) gid=1017(testsentry_grp) groups=1017(testsentry_grp)
uid=1011(testsentry) gid=1017(testsentry_grp) groups=1017(testsentry_grp)
uid=1011(testsentry) gid=1017(testsentry_grp) groups=1017(testsentry_grp)

Note: you must have the same user ID and group ID on each machine. Now verify that Hadoop can look up the group:

# hdfs groups testsentry

testsentry : testsentry_grp

All these steps must be performed as root. Next, create a testsentry principal in the KDC (it is not mandatory, but it keeps things organized and easy to understand). Go to the KDC host and run the following commands:

# kadmin.local 

Authenticating as principal root/admin@BDACLOUDSERVICE.ORACLE.COM with password. 

kadmin.local:  addprinc testsentry

WARNING: no policy specified for testsentry@BDACLOUDSERVICE.ORACLE.COM; defaulting to no policy

Enter password for principal "testsentry@BDACLOUDSERVICE.ORACLE.COM": 

Re-enter password for principal "testsentry@BDACLOUDSERVICE.ORACLE.COM": 

Principal "testsentry@BDACLOUDSERVICE.ORACLE.COM" created.

kadmin.local:  xst -norandkey -k testsentry.keytab testsentry

Entry for principal testsentry with kvno 1, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:testsentry.keytab.

Entry for principal testsentry with kvno 1, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:testsentry.keytab.

Entry for principal testsentry with kvno 1, encryption type des3-cbc-sha1 added to keytab WRFILE:testsentry.keytab.

Entry for principal testsentry with kvno 1, encryption type arcfour-hmac added to keytab WRFILE:testsentry.keytab.

Entry for principal testsentry with kvno 1, encryption type des-hmac-sha1 added to keytab WRFILE:testsentry.keytab.

Entry for principal testsentry with kvno 1, encryption type des-cbc-md5 added to keytab WRFILE:testsentry.keytab.

Now we have everything set up for the unprivileged user, and it is time to start configuring Sentry policies. Since kafka is a superuser, we run the Sentry admin commands as the kafka user. To obtain Kafka credentials, run:

$ kinit -kt /opt/kafka/security/kafka.keytab kafka/`hostname`

$ klist 

Ticket cache: FILE:/tmp/krb5cc_1001

Default principal: kafka/kafka1.us2.oraclecloud.com@BDACLOUDSERVICE.ORACLE.COM

Valid starting     Expires            Service principal

06/15/18 01:37:53  06/16/18 01:37:53  krbtgt/BDACLOUDSERVICE.ORACLE.COM@BDACLOUDSERVICE.ORACLE.COM

    renew until 06/20/18 01:37:53

First, we need to create a role. Let's call it testsentry_role:

$ kafka-sentry -cr -r testsentry_role

Let's check that the role has been created:

$ kafka-sentry -lr



[opc@cfclbv3872 ~]$  

As soon as the role is created, we need to grant it some permissions on a certain topic:

$ kafka-sentry -gpr -r testsentry_role -p "Host=*->Topic=testTopic->action=write"

and also describe:

$  kafka-sentry -gpr -r testsentry_role -p "Host=*->Topic=testTopic->action=describe"

As the next step, we have to allow a consumer group to read and describe from this topic:

$ kafka-sentry -gpr -r testsentry_role -p "Host=*->Consumergroup=testconsumergroup->action=read"

$ kafka-sentry -gpr -r testsentry_role -p "Host=*->Consumergroup=testconsumergroup->action=describe"

The next step is linking the role and the group: we assign testsentry_role to testsentry_grp (the group automatically inherits all of the role's permissions):

$ kafka-sentry -arg -r testsentry_role -g testsentry_grp

After this, let's check that our mapping works:

$ kafka-sentry -lr -g testsentry_grp


Now let's review the list of permissions that our role has:

$ kafka-sentry -r testsentry_role -lp





It's also very important to have the consumer group in the client properties file:

$ cat /opt/kafka/security/client.properties






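The listing above lost its contents in extraction; given the Sentry grants just configured, client.properties would typically add a group.id on top of the SASL_SSL settings. This is a sketch, with values assumed from the surrounding steps:

```properties
security.protocol=SASL_SSL
sasl.kerberos.service.name=kafka
ssl.truststore.location=/opt/kafka/security/truststore.jks
ssl.truststore.password=changeit
group.id=testconsumergroup
```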
After everything is set, we need to switch to the testsentry user for testing:

$ kinit -kt /opt/kafka/security/testsentry.keytab testsentry

$ klist 

Ticket cache: FILE:/tmp/krb5cc_1001

Default principal: testsentry@BDACLOUDSERVICE.ORACLE.COM

Valid starting     Expires            Service principal

06/15/18 01:38:49  06/16/18 01:38:49  krbtgt/BDACLOUDSERVICE.ORACLE.COM@BDACLOUDSERVICE.ORACLE.COM

    renew until 06/22/18 01:38:49

test writes:

$ kafka-console-producer --broker-list kafka1.us2.oraclecloud.com:9093 --topic testTopic --producer.config /opt/kafka/security/client.properties

> testmessage1

> testmessage2


Everything seems OK; now let's test a read:

$ kafka-console-consumer --bootstrap-server kafka1.us2.oraclecloud.com:9093 --topic testTopic --from-beginning --consumer.config /opt/kafka/security/client.properties



Now, to show Sentry in action, I'll try to read messages from another topic, one outside the topics allowed for our test group:

$ kafka-console-consumer --from-beginning --bootstrap-server kafka1.us2.oraclecloud.com:9093 --topic foobar --consumer.config /opt/kafka/security/client.properties

18/06/15 02:54:54 INFO internals.AbstractCoordinator: (Re-)joining group testconsumergroup

18/06/15 02:54:54 WARN clients.NetworkClient: Error while fetching metadata with correlation id 13 : {foobar=UNKNOWN_TOPIC_OR_PARTITION}

18/06/15 02:54:54 WARN clients.NetworkClient: Error while fetching metadata with correlation id 15 : {foobar=UNKNOWN_TOPIC_OR_PARTITION}

18/06/15 02:54:54 WARN clients.NetworkClient: Error while fetching metadata with correlation id 16 : {foobar=UNKNOWN_TOPIC_OR_PARTITION}

18/06/15 02:54:54 WARN clients.NetworkClient: Error while fetching metadata with correlation id 17 : {foobar=UNKNOWN_TOPIC_OR_PARTITION}

So, as we can see, we could not read from a topic that we are not authorized to read. Note that the broker reports UNKNOWN_TOPIC_OR_PARTITION rather than an explicit authorization error.

To systematize all of this, I'd like to put the user → group → role → privileges flow in one picture:

I'd also like to summarize the steps required to get the list of privileges for a certain user (testsentry in my example):

// Run as the Kafka superuser

$ kinit -kt /opt/kafka/security/kafka.keytab kafka/`hostname`

$ klist 

Ticket cache: FILE:/tmp/krb5cc_1001

Default principal: kafka/cfclbv3872.us2.oraclecloud.com@BDACLOUDSERVICE.ORACLE.COM

Valid starting     Expires            Service principal

06/19/18 02:38:26  06/20/18 02:38:26  krbtgt/BDACLOUDSERVICE.ORACLE.COM@BDACLOUDSERVICE.ORACLE.COM

    renew until 06/24/18 02:38:26

// Get the list of groups to which a certain user belongs

$ hdfs groups testsentry

testsentry : testsentry_grp

// Get the list of roles for a certain group

$ kafka-sentry -lr -g testsentry_grp


// Get the list of permissions for a certain role

$ kafka-sentry -r testsentry_role -lp






Based on what we saw above, our user testsentry can read from and write to the topic testTopic. To read data, the user must belong to the consumer group "testconsumergroup".
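The user → group → role → privileges resolution summarized above can be modeled as a simple lookup chain. The following is only a toy sketch in Python of how the effective privileges are resolved, not Sentry's actual implementation, and the privilege strings are assumptions based on the grants made earlier in this post:

```python
# Toy model of the user -> group -> role -> privileges chain
# (illustrative only; the privilege strings are assumed, not copied
# from real kafka-sentry output).
user_groups = {"testsentry": ["testsentry_grp"]}
group_roles = {"testsentry_grp": ["testsentry_role"]}
role_privileges = {
    "testsentry_role": [
        "HOST=*->TOPIC=testTopic->action=describe",
        "HOST=*->TOPIC=testTopic->action=read",
        "HOST=*->TOPIC=testTopic->action=write",
        "HOST=*->CONSUMERGROUP=testconsumergroup->action=read",
    ],
}

def privileges_for(user):
    """Collect every privilege a user inherits via group and role membership."""
    privs = []
    for group in user_groups.get(user, []):
        for role in group_roles.get(group, []):
            privs.extend(role_privileges.get(role, []))
    return privs
```

A user with no group mapping simply resolves to an empty privilege list, which is exactly the "UNKNOWN_TOPIC_OR_PARTITION" situation we hit when reading the foobar topic.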

Security Implementation Step 4 – Encryption At Rest

The last part of the security journey is encryption of the data you store on disk. There are multiple ways to do this; one of the most common is Navigator Encrypt.


Oracle Blogs | The Data Warehouse Insider Blog

NDSolve missing dynamics

I am trying to use NDSolve to solve a set of ODEs coupled in a slightly unusual way, and am running into some problems.

The following code generates the equations of interest, where the solution of one ODE is used as the initial condition for a second, generating a collection of 2D interpolating functions (apologies for the large matrix; this is the minimal example I could think of!):

(*Prepare vector of dependent variables*)
Rho[t_]= Flatten@Array[Subscript[r,#1,#2][t]&,{3,3}];
Lamb[t_, tau_]=Flatten@Array[Subscript[La,#1,#2][t,tau]&,{3,3}];

(*Include propagator matrix*)
lio ={{-0.0000506308, 0. + 1. I, 0, 0. - 1. I, 0, 0, 0, 0, 0}, {0. + 1.I, -9.72405, 0, 0, 0. - 1. I, 0, 0, 0, 0}, {0,  0, -4.72405, 0, 0, 0. - 1. I, 0, 0, 0}, {0. - 1. I, 0, 0, -9.72405,   0. + 1. I, 0, 0, 0, 0}, {0, 0. - 1. I, 0, 0. + 1. I, -10., 0, 0, 0,   0}, {0, 0, 0. - 1. I, 0, 0, -5., 0, 0, 0}, {0, 0, 0, 0, 0,   0, -4.72405, 0. + 1. I, 0}, {0, 0, 0, 0, 0, 0, 0. + 1. I, -5.,   0}, {0.0000506308, 0, 0, 0, 10., 0, 0, 0, 0}};

(*Solve the first ODE*) 
rho0 = {1, 0, 0, 0, 0, 0, 0, 0, 0};
Rsol = First@NDSolve[{D[Rho[t],t]==lio.Rho[t], Rho[0]==rho0}, Rho[t],{t,0,100}];

(*Solve the second ODE using the first as initial condition*) 
Lam0 = {0., 0., 0., 0., 0., 0., 0. + 1. Subscript[r, 2, 1][t], 0. + 1. Subscript[r, 2, 2][t], 0. + 1. Subscript[r, 2, 3][t]}/.Rsol;
LambSol = NDSolveValue[{D[Lamb[t,tau],tau]==lio.Lamb[t,tau], Lamb[t,0] == Lam0},Lamb[t,tau],{t,0,100},{tau,0,3}];

The issue arises when we compare the solutions in the t variable. Plotting Rsol[[5]] and the matching element LambSol[[-2]], with tau = 0, we find:


(plot comparing Rsol[[5]] in blue and LambSol[[-2]] in yellow not shown)

As you can see, the dynamics found from the second ODE solution (yellow) don't match the converged solution from Rsol (blue). I think the reason is that NDSolve is not taking enough points along the t-axis in the second NDSolve call (shown by the red points). I can fix this to an extent by decreasing MaxStepFraction; however, taking this too small, or for systems of equations that are too large, makes the code take excessive amounts of time or crashes the Kernel.

Does anybody have insight into using NDSolve in this way? I'm at a loss as to how to tackle this problem.


Recent Questions – Mathematica Stack Exchange

Authorised Push Payment Fraud – The Liability Challenge


Last week, the National Board for Customer Disputes in Sweden, after reviewing cases referred to it, ruled that banks should be liable for so-called "push payment" fraud losses over a certain amount.

Authorised push payment fraud, or APP fraud, is gaining in popularity in the criminal community. Customers are being tricked into authorising payments by persuasive social engineering schemes run by criminals. These criminals have been so successful that this kind of fraud even has a nickname: hypnofraud.

Fraudsters have always targeted the weakest link in the process. As systems become more and more secure, the weakest link has become the customers themselves.

The push payment fraud trend has sparked debate among Payment Services Providers (banks and other financial institutions), regulators, and consumer bodies about who should foot the bill when these kinds of schemes succeed. In 2016, the UK consumer organization Which? filed a super-complaint calling for PSPs to do more to stop this kind of fraud, and to take greater responsibility for the losses when customers fall for these scams.

The question of liability isn’t straightforward, as my colleague Sarah Rutherford noted in a recent post. On one hand, customers are being tricked by highly convincing, almost hypnotic fraudsters, often posing as representatives from a bank. Whilst the industry can educate consumers about this, we can’t expect all customers to be experts in identifying whether calls, emails or SMS are genuine or fraudulent. On the other hand, if a customer withdrew cash from an ATM and was persuaded to hand over that cash by a fraudster, no one would expect the bank to foot the bill.

Whilst regulators and consumer bodies around the world make their own judgements, there is something the banks can do to reduce the scale of this problem and make social engineering scams less successful. By analysing the way each customer normally uses their account — whether transactions are authenticated by them or not — they can detect transactions that are out of character and stop them before funds disappear from accounts.

Customer behaviour profiling is a key way to detect and stop fraud from taking place, whilst allowing a frictionless experience for customers going about their daily business. For more on this, see our posts on the FICO Blog: http://www.fico.com/en/blogs/tag/fraud/.



Why You Should Already Have a Data Governance Strategy

Garbage in, garbage out. This motto has been true ever since punched cards and teletype terminals. Today’s sophisticated IT systems depend just as much on good quality data to bring value to their users, whether in accounting, production, or business intelligence. However, data doesn’t automatically format itself properly, any more than it proactively tells you where it’s hiding or how it should be used. No, data just is. If you want your business data to satisfy criteria of availability, usability, integrity, and security, you need a data governance strategy.

Data governance in general is an overarching strategy for organizations to ensure the data they use is clean, accurate, usable, and secure. Data stakeholders from business units, the compliance department, and IT are best positioned to lead data governance, although the matter is important enough to warrant CEO attention too. Some organizations go as far as appointing a Data Governance Officer to take overall charge. The high-level goal is to have consistent, reliable data sets to evaluate enterprise performance and make management decisions.

Ad-hoc approaches are likely to come back to haunt you. Data governance has to become systematic, as big data multiplies in type and volume, and users seek to answer more complex business questions. Typically, that means setting up standards and processes for acquiring and handling data, as well as procedures to make sure those processes are being followed. If you’re wondering whether it’s all worth it, the following five reasons may convince you.


Reason 1: Ensure data availability

Even business intelligence (BI) systems won't look very smart if users cannot find the data needed to power them. In particular, self-service BI means that the data must be easy enough to locate and to use. After years of hearing about the sinfulness of organizational silos, it should be clear that even if individual departments "own" data, that data must be governed in the same way across the organization. Authorization to use the data may be restricted, as in the case of sensitive customer data, but users should not be left unaware of its existence when it could help them in their work.

Availability is also a matter of having appropriate data that is easy enough to use. With a trend nowadays to store unstructured data from different sources in non-relational databases or data lakes, it can be difficult to know what kind of data is being acquired and how to process it. Data governance is therefore a matter of first setting up data capture to acquire what your enterprise and its different departments need, rather than everything under the sun. Governance then also ensures that data schemas are applied to organize data when it is stored, or that tools are available for users to process data, for example to run business analytics from non-relational (NoSQL) databases.

Reason 2: Ensure users are working with consistent data

When the CFO and the COO work from different sets of data and reach different conclusions about the same subjects, things are going to be difficult. The same is true at all other levels in an enterprise. Users must have access to consistent, reliable data, so that comparisons make sense and conclusions can be checked. This is already a good reason for making sure that data governance is driven across the organization, by a team of executives, managers, and data stewards with the knowledge and authority to make sure the same rules are followed by all.

Global data governance initiatives may also grow out of attempts to improve data quality at departmental levels, where individual systems and databases were not planned for information sharing. The data governance team must deal with such situations, for instance, by harmonizing departmental information resources. Increased consistency in data means fewer arguments at executive level, less doubt about the validity of data being analyzed, and higher confidence in decision making.

Reason 3: Determining which data to keep and which to delete

The risks of data hoarding are the same as those of physical hoarding. IT servers and storage units full of useless junk make it hard to locate any data of value or to do anything useful with it afterwards. Users use stale or irrelevant data as the basis for important business decisions, IT department expenses mushroom, and vulnerability to data breaches increases. The problem is unfortunately common. 33% of the data stored by organizations is simply ROT (redundant, obsolete, or trivial), according to the Veritas Data Genomics Index 2017 survey.

Yet things don’t have to be that way. Most data does not have to be kept for decades, “just in case.” As an example, retailing leader Walmart uses only the last four weeks’ transactional data for its daily merchandising analytics. It is part of good data governance strategy to carefully consider which data is important to the organization and which should be destroyed. Data governance also includes procedures for employees to make sure data is not unnecessarily duplicated, as well as policies for systematic data retirement (for instance, for archiving or destruction) according to age or other pertinent criteria.
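To make the idea concrete, an age-based retirement rule can be sketched in a few lines of Python (the record layout and the four-week window are illustrative assumptions, echoing the Walmart example):

```python
from datetime import datetime, timedelta

# Hypothetical retention window: four weeks of transactional data.
RETENTION_DAYS = 28

def partition_by_retention(records, now, retention_days=RETENTION_DAYS):
    """Split records into (keep, retire) based on their timestamp."""
    cutoff = now - timedelta(days=retention_days)
    keep = [r for r in records if r["ts"] >= cutoff]
    retire = [r for r in records if r["ts"] < cutoff]
    return keep, retire

now = datetime(2018, 6, 15)
records = [
    {"id": 1, "ts": datetime(2018, 6, 1)},  # two weeks old -> keep
    {"id": 2, "ts": datetime(2018, 1, 1)},  # months old -> retire
]
keep, retire = partition_by_retention(records, now)
```

In a real governance program the "retire" set would be routed to archiving or secure destruction according to the documented policy, not silently deleted.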

Reason 4: Resolve analysis and reporting issues

An important dimension in data governance is the consistency across an organization of its metrics, as well as the data driving them. Without clearly recorded standards for metrics, people may use the same word, yet mean different things. Business analytics are a case in point, when analytics tools vary from one department to another. Self-service analytics or business intelligence can be a boon to an enterprise, but only if people interpret metrics and reports in a consistent way.

When reports lack clarity, the temptation is often to blame the technology. The root cause, however, is often misconfiguration of the tools and systems involved, or even their faulty application, as when reporting tools are run directly against production databases, triggering performance problems that leave neither transactions nor analytics satisfactorily served. Ripping out and replacing fundamentally sound systems is not the solution. Instead, improved data governance brings more benefit, faster, and at far less cost.

Reason 5: Security and compliance with laws concerning data governance

Consequences for non-compliance with data regulations can be enormous, especially where private individuals' information is concerned. A case in point: the European General Data Protection Regulation (GDPR), effective May 2018, sets non-compliance fines of up to some $22 million (€20 million) or four percent of the offender's worldwide turnover, whichever is higher, for data misuse or breaches affecting European citizens.

Effective data governance helps an organization to avoid such issues, by defining how its data is to be acquired, stored, backed up, and secured against accidents, theft, or misuse. These definitions also include provision for audits and controls to ensure that the procedures are followed. Realistically, organizations will also conduct suitable awareness campaigns to make sure that all employees working with confidential company, customer, or partner data understand the importance of data governance and its rules. Education and awareness campaigns will become increasingly important as user access to self-service solutions increases, as will the levels of data security already inherent in those solutions.


If you think about data as a strategic asset, the idea of governance becomes natural. Company finances must be kept in order with the necessary oversight and audits, workplace safety must be guaranteed and respect the relevant regulations, so why should data – often a key differentiator and a confidential commodity – be any different? As IT self-service and end-user empowerment grow, the importance of good data governance increases too. Business user autonomy in spotting trends and taking decisions can help an enterprise become more responsive and competitive, but not if it is founded on data anarchy.

Effective data governance is also a continuing process. Policy definition, review, adaptation, and audit, together with compliance reviews and quality control, are all regularly carried out and repeated as a data governance life cycle. As such, data governance is never finished, because new sources, uses, and regulations about data are never finished either. For contexts such as business intelligence, especially in a self-service environment, good data governance helps users to use the right data in the right way, to generate business insights correctly and take sound business decisions.




Blog – Sisense

Terminology Check – What is a Power BI App?


Let’s say you just heard someone mention a Power BI app. What exactly do they mean by that? Well, it depends. The term “app” is used kind of a lot in the Power BI world. So, here’s a quick reference to help you decode the conversation. I’m going to start with the most likely options, working down to other options. Which one someone is referring to really depends on their role and their level of familiarity with the Power BI ecosystem.

Power BI App

A Power BI App is a packaged up set of content in the web-based Power BI Service. Related reports, workbooks, dashboards, and datasets are published from an App Workspace into an App for users to consume. 

Power BI App Workspace

An App Workspace in the Power BI Service is where reports, workbooks, dashboards, and datasets are saved, and where data refresh schedules and other settings are defined. An App Workspace is suited to development & collaboration with coworkers (whereas My Workspace is a private area). Smaller teams might do everything they need to do within an App Workspace, whereas larger teams use an App Workspace as the collaboration area for content before it gets published to a Power BI App for consumption. You can have quite a few App Workspaces, depending on how you organize content (for instance, by subject area, by project, by department, or by type of analysis). 

Power BI Mobile App

There are iOS, Android, and Windows mobile apps for consuming Power BI content. In addition to displaying content from the Power BI Service, the mobile apps can also display content from SQL Server Reporting Services and Power BI Report Server. 

Power BI Desktop Application

Power BI Desktop is a client application which is installed on a user’s PC. It is used for creating queries, data models, relationships, calculations, and reports for Power BI. Power BI Desktop can be downloaded from the web. However, it’s recommended to use the Windows Store instead because updates are installed automatically, even if you don’t have admin rights on your machine. The automatic updates are very helpful because Power BI Desktop is updated once per month, plus occasional bug fixes here and there.


PowerApps

There are three tools in the Business Applications Group currently: Power BI, Flow, and PowerApps. PowerApps is an Office 365 feature that allows you to fairly easily build line-of-business applications with low code or no code. There are lots of possibilities for integration between these three products. For instance, you can display a Power BI report in a PowerApps app, display a PowerApps input screen within a Power BI dashboard, or have a Power BI alert trigger a Flow which causes something else to happen in a workflow. 


AppSource

AppSource is a marketplace for finding line-of-business applications for Power BI, Office 365, Dynamics 365, and other products and services. Published offerings can be specific to your organization (such as a Power BI App, discussed above), from third parties (like Salesforce), or from partner companies (such as my employer, BlueGranite). 

You Might Also Like…

Checklist for Finalizing a Data Model in Power BI Desktop

Why the Default Summarization Property in Power BI is So Important


Blog – SQL Chick

WATCH: Stand Up/Sketch Spotlight Featuring Comedian/Actor Rell Battle


Rell Battle thinks protests start too early and black lives should also matter at brunch.

Watch his stand-up below from The Conan Show:



The Humor Mill