Category Archives: Business Intelligence

Former IBM employee pleads guilty to ‘economic espionage’ after stealing trade secrets for China

A former developer for IBM pled guilty on Friday to economic espionage and to stealing trade secrets related to a type of software known as a clustered file system, which IBM sells to customers around the world.

Xu Jiaqiang stole the secrets during his stint at IBM from 2010 to 2014 “to benefit the National Health and Family Planning Commission of the People’s Republic of China,” according to the U.S. Justice Department.

In a press release describing the criminal charges, the Justice Department also stated that Xu tried to sell secret IBM source code to undercover FBI agents posing as tech investors. (The department does not explain whether Xu’s scheme to sell to tech investors was meant to benefit China or to line his own pockets.)

Part of the sting involved Xu demonstrating the stolen software, which speeds computer performance by distributing work across multiple servers, on a sample network. The former employee acknowledged that others would know the software had been taken from IBM, but said he could create extra scripts to help mask its origin.

Xu, a Chinese national who studied computer science at the University of Delaware, will be sentenced on October 13.

The Justice Department’s press release does not identify IBM, referring instead to “the Victim Company.” But other news outlets have named IBM as the target of the theft, and a LinkedIn page with Xu’s name shows he worked at IBM as a file system developer during the relevant dates.

IBM did not immediately respond to a request for comment on Sunday.

This isn’t the first time that Chinese nationals have carried out economic espionage against American companies. In 2014, the Justice Department charged five Chinese hackers with targeting U.S. nuclear and solar energy firms. And late last year, the agency charged three others with hacking U.S. law firms with the goal of trading on the insider information they obtained.

This story originally appeared on Fortune.com. Copyright 2017

Big Data – VentureBeat

Thoughts on the 2017 KDNuggets Poll on Data Science Tools

Are you a data scientist who does not know KDNuggets.com?  How is that possible?  OK, go there right now, add a bookmark, and make it part of your daily reading list.  But don’t forget to come back here afterwards to read the rest of this post.

KDNuggets is one of the most popular portals for data science and a great source for news and information.  It probably won’t be winning a design award any time soon, but the rich, deep content is why you will go back over and over, and that’s what really matters.

I spend more time than usual on KDNuggets in May, because that’s when the annual KDNuggets poll, “What data science solution did you use in the past 12 months?”, comes out. Gregory Piatetsky-Shapiro, the editor of KDNuggets and one of the best-known data scientists in the world, has been running this poll for 18 years.

Gregory just published the results for 2017: about 2,900 people shared their software preferences for data science tools.  And as always, there is a lot to learn from those results.

What’s new in data science in 2017?

First things first: RapidMiner was again voted the most popular general data science platform, and this is all thanks to our user community!  33% of all voters said that they are using RapidMiner, which is an amazing result.  Many thanks to all of you!

But we know that data scientists use up to six different tools in parallel.  So besides RapidMiner, what other tools are people using?

Let’s start with the programming languages.  It should not come as a surprise that R and Python are the two leading languages for data science.  This year, Python got slightly more votes than R, though the difference may not be significant.  But in general, Python has shown the bigger growth rates in previous years, and I would not be surprised to see it take over the leading position from R in the future.  And then there is of course SQL, which took third place among the programming languages.  SQL will never die, so no surprise here.

Connected to Python’s growth is Anaconda, a Python distribution with package management.  Big shout-out to our friends at Anaconda for growing that quickly!

On the infrastructure level, Apache Spark was used by 23% of all data scientists, but Hadoop only by 7%.  And while we are talking about big data: the library MLlib was used by only 5%, much less than many other options.  To be honest, this was a bit of a surprise to me.

Deep Learning is all the rage

Yes, I am guilty of not playing along with the crazy deep learning hype of the past few years.  After all, the technology is much less innovative than most people believe.  But I will admit that there is a strong growth trend around deep learning in our field.

This year, more than 32% of all data scientists said that they are using deep learning, up from 18% in 2016 and 9% in 2015.  Doubling every year is impressive growth indeed.

There are now a dozen or so deep learning libraries.  The most widely used one, of course, is Google’s TensorFlow, now used by 20% of all data scientists.

RapidMiner’s history with the KDNuggets poll

I view this poll a bit like a sporting event.  It won’t make or break a vendor, but I at least take it seriously.  I think all vendors should take it seriously, and it looked like more of them did this year.

The history of RapidMiner in the poll is interesting as well.  In 2006, our co-founder Ralf Klinkenberg was already asking why YALE was not an option in the poll (YALE was the former name of RapidMiner, an acronym for “Yet Another Learning Environment”).  Who could have known that only 11 years later machine learning would be all the hype?

RapidMiner was first included in the poll in 2007, and YALE was the most widely used open source platform from the start.  Some of our commercial competitors like SAS and SPSS were still ahead of us back then, but thanks to our loyal community and user base this changed quite quickly.  In 2008, we ended up just shy of SPSS Clementine (which later became SPSS Modeler).  We remained in the top 3 for a couple of years, and during that time other open source solutions like R started to gain more traction in the poll.

Starting in 2011, RapidMiner took over first place among all data science platforms, and we have been able to keep this position ever since.  One of the great things, however, is that data scientists now have many different approaches available and often mix and match the different solutions.  There are clearly leading data science platforms like RapidMiner, and in addition we have two great programming languages for data science as well, namely R and Python.

And then there are dozens of libraries like MLlib or TensorFlow, most of them accessible through RapidMiner as well.  So you will be able to find the right tool for your problem, which is a wonderful situation for data scientists to be in.  Compare this to the software offerings in the earlier years of this poll (check out the links above).

It’s a great time to be a data scientist indeed!

RapidMiner

Simultaneous Auto-growth in Multiple Files

SQL Server 2016 has a new configuration option to control the auto-growth of multiple files in the same filegroup. When we create several files in the same filegroup, SQL Server does a round robin across them, writing a piece of the data to each file in turn until all the data is stored.

However, the amount of data written to each file may not always be the same. The algorithm SQL Server uses for the round robin takes into account the amount of free space in each file. Because of that, to ensure an even data distribution across the files, we need to keep the files the same size.

If an auto-growth happens, one file will be bigger than the others, and therefore the data distribution across the files will become unbalanced.
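
A quick way to see whether the files have drifted apart is to compare their sizes directly. Here is a minimal sketch using the catalog views sys.database_files and sys.filegroups (the LEFT JOIN keeps the log file, which belongs to no filegroup, in the output):

-- Compare data file sizes per filegroup; uneven sizes skew the round robin
SELECT f.name       AS file_name,
       fg.name      AS filegroup_name,
       f.size / 128 AS size_mb  -- size is stored in 8 KB pages
FROM sys.database_files AS f
     LEFT JOIN sys.filegroups AS fg
            ON f.data_space_id = fg.data_space_id;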

Starting with SQL Server 2016, we have a solution for this problem: filegroups now have an autogrow_all_files attribute. When enabled, it ensures that all files in the filegroup grow together, keeping the same size.
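
For an existing database, the attribute is enabled with a single ALTER DATABASE statement. Here is a minimal sketch (the full version used in the second demo below also adds a WITH ROLLBACK IMMEDIATE termination clause):

-- Enable simultaneous growth for all files in filegroup FG1
ALTER DATABASE sales
    MODIFY FILEGROUP [FG1] AUTOGROW_ALL_FILES;
GO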

Let’s execute a demo, step by step.

1) Create a new database. The statement below creates the database with two filegroups, PRIMARY and another one called FG1. You need to correct the paths of the files before executing this statement.

CREATE DATABASE sales
ON PRIMARY
    (NAME = sales_dat,  FILENAME = 'C:\MyFolder\Sales.mdf',  SIZE = 8MB,  MAXSIZE = 500MB, FILEGROWTH = 20%),
FILEGROUP fg1 -- Default
    (NAME = sales_dat2, FILENAME = 'C:\MyFolder\Sales2.ndf', SIZE = 8MB,  MAXSIZE = 500MB, FILEGROWTH = 20%),
    (NAME = sales_dat3, FILENAME = 'C:\MyFolder\Sales3.ndf', SIZE = 8MB,  MAXSIZE = 500MB, FILEGROWTH = 20%)
LOG ON
    (NAME = sales_log,  FILENAME = 'C:\MyFolder\Sales.ldf',  SIZE = 20MB, MAXSIZE = UNLIMITED, FILEGROWTH = 10MB);
GO

2) Check the filegroup configuration. The result, in the image below, shows the default value of the autogrow_all_files attribute: disabled.

USE sales
GO
SELECT name, is_autogrow_all_files
FROM sys.filegroups;

[Image: sys.filegroups result showing is_autogrow_all_files = 0]

3) Let’s create a table in filegroup FG1, insert 2,000 records, and check the database files. The auto-growth hasn’t happened yet.

CREATE TABLE test
  (
     id    INT IDENTITY(1, 1) PRIMARY KEY,
     texto CHAR(8000)
  )
ON fg1
GO
INSERT INTO test
VALUES ('x')
GO 2000
EXEC sp_helpfile
GO

[Image: sp_helpfile output after the initial 2,000 inserts]

4) Using the DMV sys.dm_db_database_page_allocations, we can identify the data distribution across the files.

SELECT extent_file_id, COUNT(*)
FROM sys.dm_db_database_page_allocations
         (DB_ID('Sales'), OBJECT_ID('test'), 1, 1, 'DETAILED')
GROUP BY extent_file_id
GO

[Image: page counts per extent_file_id showing the distribution across files]

5) Let’s insert 20 more records and check the files again. The auto-growth has happened, but in only one of the files.

INSERT INTO test
VALUES ('x')
GO 20
EXEC sp_helpfile

[Image: sp_helpfile output showing that only one file has grown]

This result unbalances the round robin, reducing the benefit of having multiple files in the first place. Let’s try the same demonstration again, this time changing the autogrow_all_files attribute of the FG1 filegroup.

1) Drop and re-create the database, enabling autogrow_all_files and checking the change. Again, you need to correct the paths of the files.

USE master
GO

DROP DATABASE IF EXISTS Sales;
GO

CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_dat,  FILENAME = 'C:\MyFolder\Sales.mdf',  SIZE = 8MB,  MAXSIZE = 500MB, FILEGROWTH = 20%),
FILEGROUP FG1 -- Default
    (NAME = Sales_dat2, FILENAME = 'C:\MyFolder\Sales2.ndf', SIZE = 8MB,  MAXSIZE = 500MB, FILEGROWTH = 20%),
    (NAME = Sales_dat3, FILENAME = 'C:\MyFolder\Sales3.ndf', SIZE = 8MB,  MAXSIZE = 500MB, FILEGROWTH = 20%)
LOG ON
    (NAME = Sales_log,  FILENAME = 'C:\MyFolder\Sales.ldf',  SIZE = 20MB, MAXSIZE = UNLIMITED, FILEGROWTH = 10MB);
GO

ALTER DATABASE Sales
    MODIFY FILEGROUP [FG1] AUTOGROW_ALL_FILES
    WITH ROLLBACK IMMEDIATE
GO

USE Sales
GO

SELECT name, is_autogrow_all_files
FROM sys.filegroups;

[Image: sys.filegroups result showing is_autogrow_all_files = 1 for FG1]

2) Create the table, insert 2,000 records, and check the files.

CREATE TABLE test
  (
     id    INT IDENTITY(1, 1) PRIMARY KEY,
     texto CHAR(8000)
  )
ON FG1
GO

INSERT INTO test
VALUES ('x')
GO 2000

EXEC sp_helpfile
GO

[Image: sp_helpfile output after the initial 2,000 inserts]

3) Insert 20 more records and check the files again. Now the auto-growth has happened in both files, keeping the data distribution even across them.

INSERT INTO test
VALUES ('x')
GO 20

EXEC sp_helpfile

[Image: sp_helpfile output showing that both files have grown to the same size]
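
To confirm the distribution itself, not just the file sizes, you can re-run the page-allocation query from step 4 of the first demo; the page counts per extent_file_id should now stay close to each other:

-- Verify that pages remain evenly distributed across the files
SELECT extent_file_id, COUNT(*)
FROM sys.dm_db_database_page_allocations
         (DB_ID('Sales'), OBJECT_ID('test'), 1, 1, 'DETAILED')
GROUP BY extent_file_id
GO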

SQL – Simple Talk

Alipay Reaches U.S. Payments Deal

Ant Financial’s Alipay service has inked a deal with U.S. payment processing provider First Data Corp that will allow Alipay users to make purchases from more than four million American vendors.

Souheil Badran, Alipay’s North America president, said the deal will make Alipay ubiquitous in the U.S. and enable it to enter more countries. Rather than operating independently in the U.S., Alipay prefers to cooperate with ecosystems that already have scale.

In China, Alipay and WeChat Pay hold a combined share of over 90% of the mobile payment market. At present, Alipay hopes to provide a wider range of services to Chinese tourists traveling overseas. Alipay’s mobile wallet already supports American Express, Visa, and MasterCard credit cards, and so far over 100,000 retailers in 70 international markets accept payments via Alipay.

By cooperating with First Data, Alipay will be able to lift its penetration in the U.S. to roughly the level of Apple Pay. Apple’s chief executive officer Tim Cook recently revealed on a conference call that Apple’s mobile payment service is now accepted at 4.5 million locations in the U.S.

ChinaWirelessNews.com

Individual Excellence vs. Organizational Impact: Know the Difference!

[Image: individual excellence (left) vs. organizational impact (right)]

Guess Which One Grows Your Career (or Company) More? (Hint: It’s the One on the Right)
(But Individual Excellence is a Prerequisite for Org-Wide Impact)

Last week’s post on learning DAX before learning M mostly met with positive reviews, but also drew some fire.  A few staunch M supporters showed up and voiced their disagreement – including the one and only Ken Puls.  Now I know from experience not to mess with Ken…  OK OK, I confess…  messing with Ken is a barrel of monkeys actually.  Put it on your bucket list.  That said, I have immense respect for his skills and perspective, and only enjoy messing with him out of friendship.  He’s an amazing human being and learns more things in a year than I could ever squeeze into my leaky head over a lifetime.

But I still firmly believe in what I said.  I’m not here to offer apologies – only clarification and justification.  There very much IS an olive branch in all of this, but again, that comes merely from clarification.

First clarification:  I love love LOVE Power Query and M!  They are a godsend!  I never said what some people thought I was saying, which was “meh, you can ignore/neglect that part of the platform.”  Nope, you absolutely benefit tremendously from both.

The minor tension from last week raised a MASSIVELY important point, one that transcends any technical debate and puts things in their proper perspective.  So I’m grateful for the opportunity that the misunderstanding provides us.  Let’s begin with…

At first, “how much is personal efficiency worth?” sounds like an incredibly abstract question.  I mean, how can you put a dollar figure on massive gains in personal efficiency?  Sounds impossible, right?

But I’ve got a card up my sleeve:  how much is your salary?  It’s not terribly outlandish to say that the best you could EVER do, in terms of speeding up the tasks in your workday, is to completely make yourself redundant.

And the market has already put a dollar value on THAT, right?  It’s called your salary.  No, it’s not a perfect number, not at all.  Many of you reading this are criminally underpaid in some very real sense, for instance.  But this is what the market is saying today, and that’s a very real quantity.  Furthermore it’s not like it’s really possible to automate 100% of your duties using ANY of the tools available today (thankfully!), so it’s somewhat “gracious” to set the max at 100% of salary.

The folks applying for our Principal Consultant role lately have ranged in current salary from a definitely-criminal $40k on the low end to a damn-near-executive $160k on the high end.

So let’s continue with the “gracious” theme and go with the high end:  $160k per year is a serviceable maximum ROI for “making my current job run faster.”

You might be thinking, at this point, that I’m still not being Gracious Enough.  It’s possible, after all, for a single hyper-efficient individual to suddenly replace MULTIPLE other individuals, right?  Setting aside the distasteful notion of those lost jobs for a moment, I still think my $160k figure isn’t that bad, given that it’s 100% of an entire individual on the high end of the range.  But fine, if you want to multiply it by 3 and make it $480k, I don’t think that necessarily undermines any of the points I want to make.  I’m in the business of adding more zeroes to the productivity multiplier, not linear multiplications.

At first, nothing.  Both DAX* and M, in the early going, are very much “speed up my current workflow” kinds of things.  And that’s perfectly natural – what you’re currently doing is ALWAYS the best place to start, the best place to learn.

(* Remember, when I say “DAX,” I use that as shorthand for “DAX and Modeling,” where “Modeling” is best described as “figuring out how many tables you should have, what they should look like, and how they are related.”)

And that “improve what I’m currently doing” lens is why M/Power Query steals the show in the earliest demos – it’s easier to see how it’s going to change your life, because it automates/accelerates a larger percentage of what Excel Pros have traditionally done.

Hold this thought for a moment while I introduce something else…

This is a real thing; it belongs to one of our clients, and we helped them build it.  To call it a “workbook” is a bit of an insult of course, because it’s a modern marvel: an industrial-strength DAX model with a suite of complementary scorecards as a frontend.  But all built in Excel (Power Pivot, specifically).

And this model has provably returned about $25M a year to the bottom line for this client.  As in, profit.  Not revenue.  Pure sweet earnings.  This workbook is visible in their quarterly earnings reports to Wall Street.

And this wonder of the modern world is well into its fourth year of service now, bringing its lifetime “winnings” into the $100 million range.  No lie.  This happened, and continues to happen.  This “workbook” is a tireless machine that makes it rain money.

Let’s do some math:  $25M per year vs. $160k per year is… roughly 150x.

In other words, the ROI of this project went FAR beyond any amount of “accelerating what we already did.”  It was, instead, a MASSIVE dose of “doing something we’ve NEVER done before.”

This may sound like a cheap verbal trick, but I sincerely think it is a weighty truth that everyone should internalize.  The workbook above, which now in some sense runs the show at this large client, had no predecessor whatsoever (its “forerunners” were a scattered collection of hundreds of distinct reports, each of which just buried readers in borderline-raw data).  For an even bigger example, consider that Instagram started as a hybrid of Foursquare and Mafia Wars before going “all-in” on its most popular feature, photo sharing.  The blank canvas has no ceiling, if you permit me to mash up some metaphors, and both of these success stories are rooted in a combination of analytics and courage.

What we’ve been doing traditionally, in both the traditional Excel and traditional BI worlds, is nothing to brag about.  Most of our reporting and analysis output has been designed by the path of least resistance, as opposed to defined by careful and creative thinking about what truly matters.  The president of one of our clients/partners told us last week, “people tend to measure what they can easily count, as opposed to what they SHOULD measure,” and I just about leapt out of my chair screaming “YES!  PRECISELY!”

If you want to learn more about this topic, start with Ten Things Data Can Do For You and We Have a “Crush” on Verblike Reports.  For now, it’s time to move on to Leverage.

Leverage

You wanna know why Data is so “hot” these days?  It’s because of Leverage.  Data is hot precisely because the proper application of data can impact the behavior and productivity of MANY people simultaneously.  You can’t typically save or make millions of incremental dollars as an individual, but it’s “easy” to do if you can magnify benefits across dozens of other people – or hundreds, or even thousands (as is the case with the $100M workbook).

In fact, it’s worth considering that the $100M Workbook actually offers only a modest benefit!  On a per-person basis, on a single day, you wouldn’t even notice the difference.  But multiply that modest, say, 3% benefit across tens of thousands of people and 365 days…  and you get $25M per year.  When you have the power of Leverage, you don’t even have to find something “big,” like the Instagram “pivot” from one mission to another, to get something BIG.
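
(To make that arithmetic concrete with purely hypothetical numbers: if 20,000 people each generate about $115 of value per day, a 3% lift is roughly $3.45 per person per day, and $3.45 × 20,000 people × 365 days ≈ $25M per year.)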

We are all, everyone reading this, INCREDIBLY FORTUNATE to be working in data, because of its somewhat-unique capacity for leverage.  So many jobs, whether white- or blue-collar, are essentially cogs in the machine, and the top-end benefits they provide are limited to the “just you” size.  But WE have hit the jackpot.  WE have a job that is “unfairly” capable of leverage.  It just fell into our laps.  But then the mind-numbing dosages of VLOOKUP (in the traditional Excel world) and the endless documentation and miscommunication of requirements (in the traditional BI world) deflected us into a relatively unambitious mindset.

Data has ALWAYS had the advantage of Leverage, but the traditional methodologies and tools brought tremendous friction and inertia to the table.  They wore us down – in terms of time, money, and psychic energy.  They “chokepointed” our potential.  They enforced a terribly-linear culture of thinking.  In short, the traditional tools took the potential 100x or even 1,000x leverage possibilities of Data and tamped them down to about 10x – still good!  But so much less than what we COULD do.

Well guess what?  No more chokepoint.  Whatever you want to call it – Power BI, Power Pivot, Modern Excel – the next-generation toolset from Microsoft gives us those extra zeroes of potential.

The next section is more conciliatory, but there IS something very important to bring home first.

If you MADE me choose one or the other, I’d definitely choose DAX, because I think it offers us the virtually-unlimited twin powers of WWNDB and Leverage.  In fact, I don’t just think that, I know it: I (and my companies) have been blowing people’s doors off with this new toolset since 2010.  We didn’t even get Power Query until, what, 2014?  Fully half the lifetime of this revolution pre-dates M.  Even the $100M Workbook predates M!  Heck, until Power Update came along, you couldn’t even schedule refreshes of models that relied on M, which almost by definition “funneled” M usage down the “just for me” path – and to this day, Microsoft still hasn’t released a server that natively runs M.

I just don’t think it’s nearly as easy to explore/exploit WWNDB or Leverage via the M path.  Not impossible, because there are plenty of exceptions that prove the rule.  And to be clear, I think most of the exceptions will be in the WWNDB category, not the Leverage category.

And that was kinda my whole point in last week’s article – Power Query dramatically captures the attention of new converts to Modern Excel precisely because of how well it fits and improves What We’ve Already Been Doing, as Individuals.  This is a Good Thing!  No caveats needed.  I just don’t want anyone to become so distracted with it that we miss the Big Wins of WWNDB and Leverage.

[Image: a single individual driving an org-wide win with DAX and Modeling]

This is What We Can Do With “Just” DAX and Modeling

The picture above illustrates how a single individual (you, or a member of your team) can achieve wins MUCH bigger than just themselves.  And it’s my experience-powered belief that you cannot get a Win of that size without leveraging DAX and Modeling.

But what if you THEN take that single individual’s newfound powers of WWNDB and Leverage, provided by DAX and Modeling, and now make THAT person more efficient?  “Holy Additional Multiplier, Batman!”

[Image: the same individual, leveled up with the efficiency gains of Power Query / M]

If Our DAX Modeler Superhero “Levels Up” with the Efficiency Gains of Power Query / M…  Look Out!

Yeah, if you take THAT person, and make THEM more efficient, WOW, you can do EVEN MORE of the amazing, transformational, WWNDB-and-Leverage style work.

Which we can all agree…  is a Very Good Thing.  One of my favorite personal sayings is “the length of a rectangle is no more ‘responsible’ for the area of the rectangle than the width.”  Double either one, and you double the area.  And that’s essentially my point in a nutshell: you can 100x with DAX and modeling, AND you can double with M.  If you had to choose one, choose the 100x.  But we don’t have to choose.  Adding Power Query and M to your org-wide-impact powers, even if it’s “just” 2x or 3x, delivers JUST AS MUCH incremental Big Win as the original 100x, or more.

We can have our flagons full of mead and drink them too, as Lothar once said.

PowerPivotPro

DAX “Reanimator” Series, Episode 1: Dynamic TopN Reports via Slicers

Guess how many articles are here on PowerPivotPro.com?  Go ahead and think of a number, I’ll wait.

The answer, at the time of writing, is 923.  Rob alone has published 715 articles!  And these date all the way back to 2009.

A lot of these articles are “old,” but folks, the DAX engine in Power BI (and Excel 2016) is still 99% the same today as it was when it first “hit the shelves” in the spring of 2010.

The motivation behind this “Reanimator” series, then, is twofold:

  1. Help newer converts/readers rediscover some of the most-awesome techniques previously covered here (without being so lazy as to re-post them in their original form)
  2. “Refresh” those techniques for the brave new world of Power BI (since the vast majority of old articles were written when we only had Power Pivot)

What better way to do that than to re-create those workbooks in Power BI Desktop and embed the report directly… Within. This. Post! 🙂

A New Age of Self-Service BI Users

I’ve been fortunate enough to be given the honor of sharing with you, our community, all these wonderful posts written by many of our in-house industry experts, updated in all their glory into the wonderful world of Power BI. Now you can click, slice, interact, touch (…dirty), and drill (dirtier!) into these reports to your heart’s desire. Just as the BI gods intended them to be! My hope is that these updates will introduce these tools to the growing number of self-service BI users just getting into the field who want to do AWESOME things with their reports.

Highlights From The Original Post(s)

So this update is actually a continuation of not just one… but TWO posts written by Rob in the distant past of 2012 (in technology years, that’s basically forever). The two original posts were:

Dynamic TopN Reports Using PowerPivot V2!

Dynamic TopN Reports via Slicers, Part 2

Rob demos some pretty ingenious techniques using his (now prolific) disconnected-slicers approach, not only to control the Top N number you’d like to see on charts or graphs, but also the value that you want that Top N ranked on. I’ve used it in MANY reports over the years, always impressing the customers who used them.

Now, I don’t want to give too much away in this post, instead directing you back to the walkthrough via the links above. I’m just here to whet your appetite with some fancy Power BI reports; if you want to learn the DAX code, hop into Rob’s posts.

This “Picture” Below is an Interactive Power BI!

[Embedded interactive Power BI report]

Isn’t Something Missing?

Some of our more avid blog readers may be thinking, “wasn’t there a THIRD post about TopN filtering?” Yes, in fact, there was. It was written by guest contributor Colin Banfield and is called Dynamic TopN Reports via Slicers, Part 3. It’s a fantastic post which covers ways to add BottomN metrics, Month/Year slicers, and more. I chose not to use that workbook since I wanted to capture the core story from the original posts written by Rob. If you’re so inclined, however, I recommend reading all three, as they will add real value to your DAX tool belt. Until next time, P3 Nation!

Download the Files!

Download the PBIX files

PowerPivotPro

Tech Tip Thursday: Better use of colors in Power BI

Microsoft’s Guy in a Cube has been providing tips and tricks for Power BI and Business Intelligence on his YouTube channel since 2014. Occasionally on Thursdays we highlight a different helpful video from his collection.

In this video, Guy in a Cube has a special guest presenter: regular Power BI webinar host Chuck Sterling! Chuck talks about color themes, one of his favorite new features. He looks at how you can quickly create your own themes to use within Power BI, or browse the Themes Gallery in the community to get pre-created themes from others.

Microsoft Power BI Blog | Microsoft Power BI

Nokia, China Huaxin Sign Final Agreement For Establishment Of Nokia Shanghai Bell

Nokia and China Huaxin Post and Telecommunication Economy Development Center have signed a final agreement to integrate Alcatel-Lucent Shanghai Bell with Nokia’s China business and establish the new Nokia Shanghai Bell company.

Following the announcement, the new joint venture will become Nokia’s main platform in China, continuing to develop new technologies in sectors like IP routing, fiber, fixed networks, and 5G. At the same time, with Nokia’s support, Nokia Shanghai Bell will continue to seek opportunities in overseas markets.

Alcatel-Lucent Shanghai Bell and Nokia’s China business have already been operating as one entity since reaching an interim operating agreement in January 2016. The final agreement is expected to be completed in July 2017, though it remains subject to administrative, legal, regulatory, and other conditions.

Nokia will hold 50% plus one share of Nokia Shanghai Bell, while China Huaxin will hold the remaining shares.

ChinaWirelessNews.com

NEW! Bloor Spotlight Paper: Big Data & the Mainframe, Issues and Opportunities

This new white paper from Bloor discusses common issues around Big Data deployments, including strategies for resolving them, or even turning them into opportunities.

“More or less every major organisation in the world has a mainframe at the heart of its enterprise and it is critical that big data deployments are viewed from that perspective rather than treated as isolated efforts that are distinct from the mainframe environment.”
– Author Philip Howard

Discover the six issues involved in Big Data deployments – and how to resolve them – by downloading Bloor Spotlight: Big Data and The Mainframe, Issues and Opportunities today!

Syncsort blog

Alibaba, China Telecom Seal New Deal For Network Services

Alibaba and China Telecom have signed a comprehensive strategic cooperation agreement in Beijing that includes work on the Internet of Things and payment services.

Under the agreement, the two parties will begin working together in areas such as e-commerce, online security, marketing services, cloud computing, online payments, IoT, and corporate procurement services. Financial terms and expected returns were not disclosed by either party.

The two parties will work to connect China Telecom’s five “ecospheres” (smart access, smart home, new ICT applications, Internet finance, and the Internet of Things) with Alibaba’s five “new” areas (new retailing, new manufacturing, new finance, new technology, and new energy). In addition, they will further discuss and explore in-depth deals in the future.

China Telecom and Alibaba Group were already working together under a prior partnership, having implemented multi-level cooperation in data center services and mobile payments.

So far, Alibaba has signed cooperation agreements with all three major Chinese telecom operators. In November 2016, Alibaba inked a strategic cooperation framework agreement with China Unicom in Hangzhou; and in December 2016, the e-commerce group signed a strategic cooperation framework agreement with China Mobile in Beijing.

ChinaWirelessNews.com