
Tag Archives: Shared

Heads up: Shared and certified datasets are coming to Power BI

June 4, 2019   Self-Service BI

Organizations are increasingly seeking to build a data culture so they can leverage insights every day, at all levels of the organization, across users with a variety of analytical skill sets. A key enabler of a data culture is the pervasive availability of standard, authoritative datasets that represent a single source of truth, allowing users to make decisions on trusted data and remix it to create new insights, all under unified governance.

Get ready for the imminent release of Shared and Certified datasets in Power BI! In the coming days we will start rolling out the public preview of a set of capabilities across the Power BI service and Desktop to enable the full lifecycle of sharing datasets across organizations.

These capabilities deliver value to organizations in four key areas:

  • Dataset catalog. Users looking to find authoritative data in their organization can do so easily, with a new dataset catalog experience integrated into Power BI Desktop and the service. Users receive recommendations on the datasets available to them, along with search and browse capabilities that span all data in Power BI. Shared datasets are also available in external tools, such as Excel and third-party BI tools, via the XMLA protocol, ensuring that your authoritative data is universally accessible.
  • Certification and promotion. To encourage the use of standardized datasets, IT administrators can mark datasets as certified when they are authoritative. Additionally, dataset owners can promote their datasets that are ready for further exploration by others, encouraging reuse.
  • Usage analytics. Power BI’s audit logs and Premium capacity metrics apps will show usage information atop shared datasets, allowing data owners to plan for growth and changes. Later this calendar year, we plan on releasing additional experiences to provide visibility into the use of shared datasets across your Power BI tenant.
  • Lineage tracking. Dataset owners in Power BI will be able to see the downstream use of their shared datasets by other users through the related content pane, allowing them to plan changes. Later this calendar year, we plan on releasing additional data lineage capabilities to make it even easier to visualize relationships between data artifacts in a workspace and across an organization.
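To make the XMLA access mentioned above concrete: a client such as Excel, SSMS, or a third-party BI tool points at the workspace's XMLA endpoint. A minimal sketch of a connection string follows, assuming a Premium workspace named "Sales Analytics" and a dataset named "SalesDataset" (both hypothetical names, not values from this announcement):

```
Data Source=powerbi://api.powerbi.com/v1.0/myorg/Sales Analytics;
Initial Catalog=SalesDataset;
```

The dataset appears to the client as an Analysis Services database, so existing AS-aware tooling can connect without modification.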

Use Datasets across workspaces

With shared datasets in Power BI, we are allowing a single dataset to be used by multiple reports, across workspaces. We are introducing several new features that make use of this capability:

  • Build new reports based on datasets in different workspaces.
  • Copy existing reports across workspaces.
  • Make a personal copy of a report that is part of an app you have access to.

For data providers, this means that they only need to maintain a single copy of their dataset in their own separate workspace; at the same time, report authors will be able to use those datasets to build reports in their own workspaces without having to worry about maintaining the dataset. To help users discover shared datasets relevant to them, we are introducing a new dataset discovery experience in the Power BI service and Desktop that makes it easy to browse and search to find content. This new experience will be globally available in the service by the end of this week and will be part of the June release of Power BI Desktop.

Shared and Certified datasets as well as the ability to copy reports are available to anyone with a Pro license. Free users can connect to shared datasets that reside in Premium. For details, please see the documentation.

Certify datasets to establish authoritative data sources

With Certified Datasets we are providing organizations with a mechanism to distinguish their most valued and trusted datasets. Certified datasets show up prominently in the discovery experience, ensuring that users can easily find these authoritative sources for critical information. The ability to certify datasets can be tightly controlled and documented internally via a new admin control; this way, organizations can ensure that dataset certification is a selective process resulting in truly reliable and authoritative datasets designed for use across the org.

Govern the use of shared datasets as needed for your organization

As part of this work, we are introducing new capabilities for dataset owners and tenant administrators to control the use of shared data.

For dataset owners, we are expanding the Power BI permission model with a new “build” permission, separating the permission to view pre-created reports on a dataset from the “build” permission, which allows the creation of new content, whether through the Power BI service (reports, Q&A), Desktop, Analyze in Excel, or third-party BI tools. During the roll-out of shared datasets, the permission sets of existing datasets will be migrated so that users currently assigned ‘Read’ will also get ‘Build’, maintaining the same level of access.

For tenant administrators, a new admin control called “Use datasets across workspaces” will allow you to restrict the group of users who can create reports atop shared datasets and copy reports across workspaces. This admin control will default to “Enabled for whole org” unless you had restrictions set up for the “Export data” admin control as of May 31, 2019; in that case, those restrictions will be copied to the new admin control for the initial configuration.

Sharing Power BI datasets based on Analysis Services models

Sharing across workspaces works with all types of datasets, including imported, DirectQuery, and live connections to Analysis Services. This means there is now a very convenient way to make enterprise data models housed in Analysis Services widely available: simply create a dataset connected to the data model, upload it to the service, and share it with analysts using the Build permission. This way users can find and use the data model easily, and all governance and tracking is managed within Power BI.

Power BI also has a separate experience to discover and connect to Analysis Services models registered via the On-Premises Data Gateway (accessible via Get Data -> Databases in the Power BI service). Now that shared datasets provide the unified dataset discovery experience in Power BI, we are deprecating the existing Analysis Services-specific experience and will remove it on January 15, 2020. Data owners who rely on that experience for data discovery and connection by their users should publish shared datasets from their Analysis Services models as described above.

Shared Datasets and the new Workspace experience

It is important to note that these new features are enabled only for the new Workspace experience, and not the classic Workspaces based on Office 365 Groups. Details about this restriction can be found in the documentation.

Learn more

  • Check out the documentation, which provides a lot more details about Shared and Certified Datasets, including current limitations.

  • If you’re attending the Microsoft Business Application Summit next week in Atlanta, come to the “Microsoft Power BI: Enterprise reporting” session to learn more about shared datasets.


Microsoft Power BI Blog | Microsoft Power BI


Lyft says Shared Saver is its ‘most affordable’ ride option yet

February 21, 2019   Big Data

Lyft today announced what it says is its “most affordable” ride option yet: Shared Saver. Starting this week in select cities — Denver and San Jose for now, with others to follow — riders can lock in low prices even during peak hours. Unlike Lyft’s standard Shared rides, Shared Saver isn’t affected by surge pricing.

So how does it work? Well, unlike a standard Shared ride, your Lyft driver won’t necessarily come to you. After a few minutes, you’ll be directed to a pickup spot that’s “a quick walk” (at most a few blocks, Lyft says) from your location. There you’ll meet your driver and fellow riders. Similarly, the drop-off location will be “a short walk” from your intended destination.

Lyft recently redesigned its app to promote Shared rides, chiefly by making it easier to compare prices of solo versus Shared rides and by notifying solo riders when there’s a Shared ride heading their direction that doesn’t include detours. Last year, Lyft VP of Government Relations Joseph Okpaku told TechCrunch that about 35 percent of Lyft rides are shared among passengers and that the goal is to reach 50 percent shared rides by 2020.


Shared Saver’s debut follows on the heels of Uber’s Express Pool, which directs riders to pickup points within two blocks of their origin and drops them off within two blocks of their destination. Uber claims it’s up to 50 percent cheaper than UberPool, Uber’s alternative ride-splitting option, and up to 75 percent cheaper than UberX. (Lyft didn’t provide a comparative metric for Shared Saver.)

The news also comes as Lyft gears up for an initial public offering. In December, the company, which was last valued at $15 billion, beat Uber to the punch in filing for an IPO with the Securities and Exchange Commission. According to Reuters sources, Lyft’s IPO is slated for the first half of 2019.

Lyft announced in September of last year that it had surpassed a billion rides in the nearly seven years since its founding, doubling the number of rides it delivered in less than 12 months. And it recently claimed it has 35 percent market share in the U.S., up from 20 percent 18 months earlier. (For context, rival Uber announced it had arrived at 10 billion rides back in June.) The global ride-hailing market is expected to grow to $285 billion by 2030, according to analysts at Goldman Sachs.


Big Data – VentureBeat


Container Storage Support with Cluster Shared Volumes (CSV), Storage Spaces Direct (S2D), SMB Global Mapping

August 18, 2017   BI News and Info

By Amitabh Tamhane

Goals: This topic provides an overview of persistent storage for containers, using data volumes backed by Cluster Shared Volumes (CSV), Storage Spaces Direct (S2D), and SMB Global Mapping.

Applicable OS releases: Windows Server 2016, Windows Server RS3

Prerequisites:

Blog:

With Windows Server 2016, many new infrastructure and application workload features were added that deliver significant value to our customers today. Amongst this long list, two very distinct features were added: Windows Containers & Storage Spaces Direct!

1.   Quick Introductions

Let’s review a few technologies that have evolved independently. Together these technologies provide a platform for persistent data store for applications when running inside containers.

1.1         Containers

In the cloud-first world, our industry is going through a fundamental change in how applications are being developed & deployed. New applications are optimized for cloud scale, portability & deployment agility. Existing applications are also transitioning to containers to achieve deployment agility.

Containers provide a virtualized operating system environment where an application can safely & independently run without being aware of other applications running on the same host. With applications running inside containers, customers benefit from the ease of deployment, ability to scale up/down and save costs by better resource utilization.

More about Windows Containers.

1.2         Cluster Shared Volumes

Cluster Shared Volumes (CSV) provides multi-host read/write file system access to a shared disk. Applications can read/write the same shared data from any node of the Failover Cluster. The shared block volume can be provided by various storage technologies such as Storage Spaces Direct (more about it below), traditional SANs, or iSCSI targets.

More about Cluster Shared Volumes (CSV).

1.3         Storage Spaces Direct

Storage Spaces Direct (S2D) enables highly available & scalable replicated storage amongst nodes by providing an easy way to pool locally attached storage across multiple nodes.

Create a virtual disk on top of this single storage pool & any node in the cluster can access this virtual disk. CSV (discussed above) seamlessly integrates with this virtual disk to provide read/write shared storage access for any application deployed on the cluster nodes.

S2D works seamlessly when configured on physical servers or any set of virtual machines. Simply attach data disks to your VMs and configure S2D to get shared storage for your applications. In Azure, S2D can also be configured on Azure VMs that have premium data disks attached for faster performance.

More about Storage Spaces Direct (S2D). S2D Overview Video.

1.4         Container Data Volumes

With containers, any persistent data needed by the application running inside must be stored outside of the container or its image. This persistent data can be shared read-only config state, read-only cached web pages, individual instance data (e.g., a replica of a database), or shared read-write state. A single containerized application instance can access this data from any container host in the fabric, or multiple application containers can access this shared state from multiple container hosts.

With data volumes, a folder inside the container is mapped to another folder on the container host using local or remote storage. Using data volumes, applications running inside containers access their persistent data without being aware of the infrastructure storage topology. The application developer can simply assume that a well-known directory/path holds the persistent data the application needs. This enables the same container application to run on various deployment infrastructures.

2.   Better Together: Persistent Store for Container Fabric

This data volume functionality is great, but what if a container orchestrator decides to place the application container on a different node? The persistent data needs to be available on all nodes where the container may run. Together, these technologies provide a seamless way to deliver a persistent store for the container fabric.

2.1         Data Volumes with CSV + S2D

Using S2D, you can leverage locally attached storage disks to form a single pool of storage across nodes. After the single pool of storage is created, simply create a new virtual disk, and it automatically gets added as a new Cluster Shared Volume (CSV). Once configured, this CSV volume gives you read/write access to the container persistent data shared across all nodes in your cluster.

With Windows Server 2016 (plus the latest updates), we have enabled support for mapping container data volumes on top of Cluster Shared Volumes (CSV) backed by S2D shared volumes. This gives the application container access to its persistent data no matter which node the container orchestrator places the container instance on.

Configuration Steps

Consider this example (assumes you have Docker & container orchestrator of your choice already installed):

  1. Create a cluster (in this example 4-node cluster)

New-Cluster -Name &lt;ClusterName&gt; -Node &lt;Node1&gt;,&lt;Node2&gt;,...

[Screenshot C01]

(Note: The generic warning text above is referring to the quorum witness configuration which you can add later.)

  2. Enable Cluster S2D Functionality

Enable-ClusterStorageSpacesDirect or Enable-ClusterS2D

[Screenshot C02]

(Note: For optimal performance from your shared storage, SSD cache disks are recommended, but they are not required to create a shared volume from locally attached storage.)

Verify that a single storage pool is now configured:

Get-StoragePool S2D*

[Screenshot C03]

  3. Create new virtual disk + CSV on top of S2D:

New-Volume -StoragePoolFriendlyName *S2D* -FriendlyName &lt;VolumeName&gt; -FileSystem CSVFS_REFS -Size 50GB

[Screenshot C04]

Verify that the new CSV volume was created:

Get-ClusterSharedVolume

[Screenshot C05]

This shared path is now accessible on all nodes in your cluster:

[Screenshot C06]

  4. Create a folder on this volume & write some data:

[Screenshot C07]

  5. Start a container with data volume linked to the shared path above:

This assumes you have installed Docker & are able to run containers. Start a container with a data volume:

docker run -it --name demo -v C:\ClusterStorage\Volume1\ContainerData:G:\AppData nanoserver cmd.exe

[Screenshot C08]

Once started, the application inside this container will have access to “G:\AppData”, which is shared across multiple nodes. Multiple containers started with this syntax get read/write access to this shared data.

Inside the container, G:\AppData will then be mapped to the CSV volume’s “ContainerData” folder. Any data stored on “C:\ClusterStorage\Volume1\ContainerData” will then be accessible to the application running inside the container.
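Putting steps 1–5 together, the CSV + S2D setup can be sketched as a single PowerShell sequence. The cluster, node, and volume names below are hypothetical examples, and the CSV path (Volume1) follows the walkthrough above:

```powershell
# 1. Create a four-node failover cluster (example names)
New-Cluster -Name S2DCluster -Node Node1,Node2,Node3,Node4

# 2. Enable Storage Spaces Direct to pool the locally attached disks
Enable-ClusterStorageSpacesDirect

# Confirm that a single storage pool now exists
Get-StoragePool S2D*

# 3. Create a 50 GB CSV-formatted volume on top of the pool
New-Volume -StoragePoolFriendlyName *S2D* -FriendlyName ContainerVolume -FileSystem CSVFS_REFS -Size 50GB
Get-ClusterSharedVolume

# 4. Create a folder for container data on the shared volume
New-Item -ItemType Directory -Path C:\ClusterStorage\Volume1\ContainerData

# 5. Start a container with a data volume on the shared path
docker run -it --name demo -v C:\ClusterStorage\Volume1\ContainerData:G:\AppData nanoserver cmd.exe
```

Run the sequence from any cluster node; because the CSV path is visible on every node, the same docker command works wherever the orchestrator places the container.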

2.2         Data Volumes with SMB Global Mapping (Available in Windows Server RS3 Only)

Now what if the container fabric needs to scale independently of the storage cluster? Typically, this is possible through SMB share remote access. With containers, wouldn’t it be great to support container data volumes mapped to a remote SMB share?

In Windows Server RS3, there is new support for SMB Global Mapping, which allows a remote SMB share to be mapped to a drive letter. This mapped drive is then accessible to all users on the local host, which is required for container I/O on the data volume to traverse the remote mount point.

With a Scale-Out File Server created on top of the S2D cluster, the same CSV data folder can be made accessible via an SMB share. This remote SMB share can then be mapped locally on a container host using the new SMB Global Mapping PowerShell cmdlet.

Caution: When using SMB global mapping for containers, all users on the container host can access the remote share. Any application running on the container host will also have access to the mapped remote share.

Configuration Steps

Consider this example (assumes you have Docker & container orchestrator of your choice already installed):

  1. On the container host, globally map the remote SMB share:

$creds = Get-Credential

New-SmbGlobalMapping -RemotePath \\contosofileserver\share1 -Credential $creds -LocalPath G:

This command uses the supplied credentials to authenticate with the remote SMB server and maps the remote share path to the G: drive letter (any other available drive letter can be used). Containers created on this container host can now have their data volumes mapped to a path on the G: drive.

[Screenshot C09]

  2. Create containers with data volumes mapped to local path where the SMB share is globally mapped.

[Screenshot C10]

Inside the container, G:\AppData will then be mapped to the remote share’s “ContainerData” folder. Any data stored on the globally mapped remote share will then be accessible to the application running inside the container. Multiple containers started with this syntax get read/write access to this shared data.
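The SMB scenario can likewise be sketched end to end. The file server, share, and folder names below are hypothetical examples:

```powershell
# Globally map the remote SMB share to G: (prompts for credentials)
$creds = Get-Credential
New-SmbGlobalMapping -RemotePath \\contosofileserver\share1 -Credential $creds -LocalPath G:

# Verify that the mapping is in place
Get-SmbGlobalMapping

# Start a container whose data volume points at the globally mapped share
docker run -it --name demo -v G:\ContainerData:G:\AppData nanoserver cmd.exe
```

Because the mapping is global, no credentials need to be passed into the container itself; the container simply sees a local G: path.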

This SMB global mapping support is an SMB client-side feature that works on top of any compatible SMB server, including:

  • Scaleout File Server on top of S2D or Traditional SAN
  • Azure Files (SMB share)
  • Traditional File Server
  • Third-party implementations of the SMB protocol (e.g., NAS appliances)

Caution: SMB global mapping does not support DFS, DFSN, DFSR shares in Windows Server RS3.

2.3 Data Volumes with CSV + Traditional SANs (iSCSI, FCoE block devices)

In Windows Server 2016, container data volumes are now supported on top of Cluster Shared Volumes (CSV). Since CSV already works with most traditional block storage devices (iSCSI, FCoE), mapping container data volumes to CSV lets you reuse your existing storage topology for your container persistent storage needs.


Clustering and High-Availability


Evaluating Shared Expressions in Tabular 1400 Models

December 31, 2016   Self-Service BI

In our December blog post, Introducing a Modern Get Data Experience for SQL Server vNext on Windows CTP 1.1 for Analysis Services, we mentioned that SSDT Tabular does not yet support shared expressions, but the CTP 1.1 Analysis Services engine already does. So how can you get started using this exciting new enhancement to Tabular models now? Let’s take a look.

With shared expressions, you can encapsulate complex or frequently used logic through parameters, functions, or queries. A classic example is a table with numerous partitions. Instead of duplicating a source query with minor modifications in the WHERE clause for each partition, the modern Get Data experience lets you define the query once as a shared expression and then use it in each partition. If you need to modify the source query later, you only need to change the shared expression, and all partitions that refer to it automatically pick up the changes.

In a forthcoming SSDT Tabular release, you’ll find an Expressions node in Tabular Model Explorer which will contain all your shared expressions. However, if you want to evaluate this capability now, you’ll have to create your shared expressions programmatically. Here’s how:

  1. Create a Tabular 1400 model by using the December release of SSDT 17.0 RC2 for SQL Server vNext CTP 1.1 Analysis Services. Remember that this is an early preview: install only the Analysis Services component, not the Reporting Services and Integration Services components; don’t use this version in a production environment; install fresh rather than upgrading from previous SSDT versions; and only work with Tabular 1400 models in this preview. For Multidimensional as well as Tabular 1100, 1103, and 1200 models, use SSDT version 16.5.
  2. Modify the Model.bim file from your Tabular 1400 project by using the Tabular Object Model (TOM). Apply your changes programmatically and then serialize the changes back into the Model.bim file.
  3. Process the model in the preview version of SSDT Tabular. Just keep in mind that SSDT Tabular doesn’t yet know how to deal with shared expressions, so don’t attempt to modify the source query of a table or partition that relies on a shared expression, as SSDT Tabular may become unresponsive.

Let’s go through these steps in greater detail by converting the source query of a presumably large table into a shared query, and then defining multiple partitions based on this shared query. As an optional step, afterwards you can modify the shared query and evaluate the effects of the changes across all partitions. For your reference, download the Shared Expression Code Sample.

If you want to follow the explanations on your own workstation, create a new Tabular 1400 model as explained in Introducing a Modern Get Data Experience for SQL Server vNext on Windows CTP 1.1 for Analysis Services. Connect to an instance of the AdventureWorksDW database, and import among others the FactInternetSales table. A simple source query suffices, as in the following screenshot.

[Screenshot: FactInternetSales source query]

As you’re going to modify the Model.bim file of a Tabular project outside of SSDT, make sure you close the Tabular project at this point. Then start Visual Studio, create a new Console Application project, and add references to the TOM libraries as explained under “Working with Tabular 1400 models programmatically” in Introducing a Modern Get Data Experience for SQL Server vNext on Windows CTP 1.1 for Analysis Services.

The first task is to deserialize the Model.bim file into an offline database object. The following code snippet gets this done (you might have to update the bimFilePath variable). Of course, you can have a more elaborate implementation using OpenFileDialog and error handling, but that’s not the focus of this article.

string bimFilePath = @"C:\Users\Administrator\Documents\Visual Studio 2015\Projects\TabularProject1\TabularProject1\Model.bim";
var tabularDB = TOM.JsonSerializer.DeserializeDatabase(File.ReadAllText(bimFilePath));

The next task is to add a shared expression to the model, as the following code snippet demonstrates. Again, this is a bare-bones minimum implementation. The code will fail if an expression named SharedQuery already exists. You could check for its existence by using if(tabularDB.Model.Expressions.Contains("SharedQuery")) and skip the creation if it does.

tabularDB.Model.Expressions.Add(new TOM.NamedExpression()
{
    Kind = TOM.ExpressionKind.M,
    Name = "SharedQuery",
    Description = "A shared query for the FactInternetSales Table",
    Expression = "let"
        + "    Source = AS_AdventureWorksDW,"
        + "    dbo_FactInternetSales = Source{[Schema=\"dbo\",Item=\"FactInternetSales\"]}[Data] "
        + "in"
        + "    dbo_FactInternetSales",
});

Perhaps the most involved task is to remove the existing partition from the target (FactInternetSales) table and create the desired number of new partitions based on the shared expression. The following code sample creates 10 partitions and uses the Table.Range function to split the shared expression into chunks of up to 10,000 rows. This is a simple way to slice the source data. Typically, you would partition based on the values from a date column or other criteria.

tabularDB.Model.Tables["FactInternetSales"].Partitions.Clear();
for(int i = 0; i < 10; i++)
{
    tabularDB.Model.Tables["FactInternetSales"].Partitions.Add(new TOM.Partition()
    {
        Name = string.Format("FactInternetSalesP{0}", i),
        Source = new TOM.MPartitionSource()
        {
            Expression = string.Format("Table.Range(SharedQuery,{0},{1})", i*10000, 10000),
        }
    });
}

The final step is to serialize the resulting Tabular database object with all the modifications back into the Model.bim file, as the following line of code demonstrates.

File.WriteAllText(bimFilePath, TOM.JsonSerializer.SerializeDatabase(tabularDB));

Having serialized the changes back into the Model.bim file, you can open the Tabular project again in SSDT. In Tabular Model Explorer, expand Tables, FactInternetSales, and Partitions, and verify that 10 partitions exist, as illustrated in the following screenshot. Verify that SSDT can process the table by opening the Model menu, pointing to Process, and then clicking Process Table.

[Screenshot: Process Table]

You can also verify the query expression for each partition in Partition Manager. Just remember that you must click the Cancel button to close the Partition Manager window. Do not click OK; with the December 2016 preview release, SSDT could become unresponsive.

Congratulations! Your FactInternetSales table now effectively uses a centralized source query shared across all partitions. You can modify the source query without having to update each individual partition. For example, you might decide to remove the ‘SO’ prefix from the values in the SalesOrderNumber column to get the order number in numeric form. The following screenshot shows the modified source query in the Advanced Editor window.

[Screenshot: modified source query in the Advanced Editor]

Of course, you cannot edit the shared query in SSDT yet. But you could import the FactInternetSales table a second time and then edit the source query on that table. When you achieve the desired result, copy the M script into your TOM application to modify the shared expression accordingly. The following lines of code correspond to the screenshot above.

tabularDB.Model.Expressions["SharedQuery"].Expression = "let"
    + "    Source = AS_AdventureWorksDW,"
    + "    dbo_FactInternetSales = Source{[Schema=\"dbo\",Item=\"FactInternetSales\"]}[Data],"
    + "    #\"Split Column by Position\" = Table.SplitColumn(dbo_FactInternetSales,\"SalesOrderNumber\",Splitter.SplitTextByPositions({0, 2}, false),{\"SalesOrderNumber.1\", \"SalesOrderNumber\"}),"
    + "    #\"Changed Type\" = Table.TransformColumnTypes(#\"Split Column by Position\",{{\"SalesOrderNumber.1\", type text}, {\"SalesOrderNumber\", Int64.Type}}),"
    + "    #\"Removed Columns\" = Table.RemoveColumns(#\"Changed Type\",{\"SalesOrderNumber.1\"}) "
    + "in"
    + "    #\"Removed Columns\"";

One final note of caution: If you remove columns in your shared expression that already exist on the table, make sure you also remove these columns from the table’s Columns collection to bring the table back into a consistent state.

That’s about it on shared expressions for now. Hopefully in the not-so-distant future, you’ll be able to create shared parameters, functions, and queries directly in SSDT Tabular. Stay tuned for more updates on the modern Get Data experience. And, as always, please send us your feedback via the SSASPrev email alias here at Microsoft.com or use any other available communication channels such as UserVoice or MSDN forums. You can influence the evolution of the Analysis Services connectivity stack to the benefit of all our customers.


Analysis Services Team Blog


Snowden says Petraeus shared ‘far more highly classified than I ever did’

December 4, 2016   DWH News and Info

Image: CBS News

Edward Snowden blasted the US justice department in an interview with Yahoo News on Sunday, saying “we have a two-tiered system of justice in the United States” that allows the well connected to get off with light punishments.

Snowden, a fugitive and former NSA contractor who revealed the organization’s worldwide spying powers in 2013, pointed to the case of former CIA Director Gen. David Petraeus as evidence.

“Perhaps the best-known case in recent history here is General Petraeus, who shared information that was far more highly classified than I ever did with journalists,” Snowden told Katie Couric, global news anchor at Yahoo. “And he shared this information not with the public for their benefit, but with his biographer and lover for personal benefit: conversations that had information, detailed information, about military special access programs that’s classified above Top Secret, conversations with the president, and so on.”

Couric traveled to Moscow for the face-to-face interview, where Snowden remains in exile. The full interview will be available to view Monday on YouTube.

Couric asked Snowden what plea bargain he might accept. He cited uncertainty, as “no charges are ever brought, or they’re brought very minimally” against others involved in the government or intelligence community. Snowden is facing much more.

“When the government came after [Petraeus], they charged him with a misdemeanor,” Snowden said. “He never spent a single day in jail, despite the type of classified information he exposed…We have a two-tiered system of justice in the United States where people who are either well connected to government or they have access to an incredible amount of resources, get very light punishments.”

Gen. Petraeus, reportedly a secretary of state candidate under President-elect Donald Trump, apologized for his “mistake” on ABC’s “This Week.”

“Five years ago, I made a serious mistake. I acknowledged it, I apologized for it, I paid a very heavy price for it, and I’ve learned from it,” Petraeus said Sunday.


Colbran South Africa


How consultants can work smarter with shared dashboards

July 29, 2016   Self-Service BI

Whether you’re an independent consultant or part of a larger firm, there’s one skill that’s critical for every consultant: communication. That’s where tools like Power BI’s shared dashboards can help! Using shared dashboards to communicate updates can save time and money, keep stakeholders happy, and leave consultants free to spend their time on the more important (and interesting) parts of their work.

In the Power BI service, dashboards can be shared for free with both internal and external audiences, which makes them well suited to the needs of consultants. A dashboard link is emailed directly to recipients, who can then just log into their Power BI account to see what was shared. Shared dashboards are read-only, but can be cross filtered, sliced, and queried. Row-Level Security settings apply to shared dashboards as well, so consultants can be granular when deciding who gets to see what pieces of information.

Sharing dashboards outside of your organization has a number of useful applications, no matter what kind of project you’re working on.

For example, imagine that you’re a data analyst consultant who was contracted to create a dashboard that monitors online product sales for a local company. You would collect and analyze the data, create a report, publish key elements as a dashboard, and then share it with the Sales Manager who hired you for the project.

The read-only status of the dashboard then ensures that you maintain control of your work. There’s no tampering with your data, no sharing beyond the originally defined scope, and the open possibility for more billable hours in the future in the form of updates.

Consulting agencies can also benefit from shared dashboards. For example, imagine you’re a marketing agency that has been contracted to manage a third party’s direct mail campaigns. As with the first example, you would collect and analyze data from the results of your campaigns, create a report, publish a dashboard, and then share it with the Director of Marketing for the firm that contracted you.

This shared dashboard becomes an easy and quick way for all stakeholders, across organizations, to monitor the status and effectiveness of the campaigns. It helps foster a positive working relationship where no one feels left out of the loop, and provides a virtual “paper trail” for your data-driven marketing decisions.

Consultants already have enough on their plate balancing the expectations of clients with the realities of project work, but thanks to tools like Power BI’s shared dashboards it’s easier than ever to share and sync real-time information on data analysis, status updates, and more, in a way that encourages finding insights while protecting your hard work.

Do you have your own time-saving tips for analyst consultants? Share them in our Community!

Get started sharing your dashboards with external audiences:

1. Open the dashboard and select Share.

2. Select Invite, and type their email address(es) in the top box. Include a message if you like.
You’ll see a warning for addresses outside of your organization, but you can still share with them as usual.


3. Select Share.

Your recipient will get an email with a link to the dashboard. If they have not yet created a Power BI account, they will be prompted to do so after clicking the link.

To see who has access to your dashboard, select Access.
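For consultants who automate their reporting, the same service can also be driven programmatically. The sketch below is an illustrative assumption, not part of the original post: it uses the documented Power BI REST endpoint `GET https://api.powerbi.com/v1.0/myorg/dashboards` to list the dashboards visible to the signed-in user. Acquiring the Azure AD access token is out of scope here; the `token` argument is a placeholder, not a real credential.

```python
# Hypothetical sketch: listing shareable dashboards via the Power BI
# REST API. Only the Get Dashboards endpoint is used; the access token
# must be obtained separately (e.g. through Azure AD / MSAL).
import json
import urllib.request

API_ROOT = "https://api.powerbi.com/v1.0/myorg"

def auth_headers(token: str) -> dict:
    """Bearer-token header required by every Power BI REST call."""
    return {"Authorization": f"Bearer {token}"}

def list_dashboards(token: str) -> list:
    """Return the dashboards visible to the signed-in user."""
    req = urllib.request.Request(f"{API_ROOT}/dashboards",
                                 headers=auth_headers(token))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]

if __name__ == "__main__":
    # "<access-token>" is a placeholder; a real call needs a valid token.
    for d in list_dashboards("<access-token>"):
        print(d["id"], d["displayName"])
```

From the returned list you can pick the dashboard to share through the Share dialog described above, or feed the IDs into whatever tracking your engagement uses.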


Microsoft Power BI Blog | Microsoft Power BI


How to Write Headlines That Get Shared And Drive Traffic

April 14, 2016   CRM News and Info

Headlines make the blog post. Flub your headline, and even if the rest of the post is great, you’ll see far fewer results from your efforts.

Posts with weak headlines get drastically fewer shares, fewer clickthroughs, fewer readers. And while all that might sound like a dread warning, there’s a sunny upside here: Get your headline right, and you’re halfway to success.

It’s just a few little words – how hard could that be? Well, good headlines don’t have to be hard to write, but every extra minute you can put into making them great will pay off. That’s why old-school copywriters – of the postal mail era – spent half their writing time on their headlines.

As David Ogilvy said, “On the average, five times as many people read the headline as read the body copy. When you have written your headline, you have spent eighty cents out of your dollar.”

Ogilvy might even have underestimated how valuable headlines are. In the world of social sharing, your headline may be the only part of your post that people read. That’s because most of us aren’t reading the articles we’re sharing. We just see the headline, the source, and maybe a catchy image. And we share.

This idea of sharing without reading generated a bit of a storm a few years ago. It started when Chartbeat’s CEO Tony Haile posted this tweet in response to a scuffle over Upworthy’s “curiosity gap” style headlines.

Several other sources immediately chimed in that they had seen a similar trend.

This sharing-without-reading habit makes headlines even more critical. That old adage about people judging a book by its cover has only become more true. Except now, more and more people aren’t even opening the book. Ever. They’re recommending it based on the jacket.

I doubt you need any more convincing about how important headlines are. You get it. They can make or break a blog post, an eBook, a webinar – you name the content format.

So how do you get them right?

Well, there is no perfect system for crafting a killer headline. If there was, we’d all be using it … and we might all be using the same headlines. But there are some tricks of the trade. I can’t guarantee miracles, but these techniques will put you ahead of the pack.

1. Write 25 headlines for every one you need.

This is advice from the king of viral content, Upworthy. They have a fantastic SlideShare titled “How to Make That One Thing Go Viral.” It’s the single best headline resource I’ve ever come across, so I’m including it here.

This SlideShare hammers home a number of content creation and promotion principles, but the two major ones are:

  • Good luck with trying to get something to go viral. Even Upworthy has only a 0.3% success rate for truly viral content.
  • Write 25 headlines. No, really – 25 headlines. No excuses.

Very few content creators ever write 25 headlines for their content. We should, but … it just seems so darn hard. Even I have to admit that I’m lazy – I only write 6-10 versions of each headline I use.

But for your edification (and mine) let’s run an experiment. Here’s how long it took to write 25 headlines:

  1. How to Write Better Headlines
  2. Want More Shares? Write Better Headlines
  3. 10 Headline Hacks for Dramatically More Shares
  4. Time-Tested Headline Secrets from Master Copywriters [1 minute]
  5. 10 Tricks for Better Headlines
  6. 7 Easy Ways to Write Headlines That Get More Clicks and Shares
  7. What Every Content Marketer Needs To Know About Writing Headlines
  8. Data-Based Tips for More Effective Headlines [2 minutes]
  9. What Your Readers Wish You Knew About Writing Headlines
  10. How to Write Headlines That Get More Clicks and Shares
  11. 7 Easy Ways to Write Better Headlines, Faster
  12. Want an Edge for Your Content? Write Better Headlines [3 minutes]
  13. Why Your Headline is 5 Times More Important Than The Rest of Your Content
  14. Simple Tricks to Write Headlines That Triple Your Results
  15. Headline Hacks For More Effective Content [4 minutes]
  16. 10 Tricks to Write Better Headlines Based on Recent Research
  17. New Research on How to Write Better Headlines
  18. 7 Ways to Improve the Single Most Important Aspect of Any Content [5 minutes]
  19. Headlines Make the Content: How to Write More Effective Headlines
  20. How to Write Killer Headlines
  21. 10 Easy Ways to Write Irresistible Headlines [6 minutes]
  22. The Scientifically Savvy Way to Write Irresistible Headlines
  23. If You Only Get One Part of Your Content Right, Make it the Headline
  24. Headlines are 5 Times More Important Than Any Other Part of Your Content [7 minutes]
  25. 80% of Content Marketing Success Rests in the Headline [7 minutes 20 seconds]

There you have it: You can write 25 headlines in eight minutes or less.

Your headline list may have some obvious winners and some obvious dogs. But I’d still run every one of these through two of my favorite headline analyzers. They’re CoSchedule’s Headline Analyzer and the Advanced Marketing Institute’s Emotional Marketing Value Headline Analyzer.

Here are the scores each one of those headlines got from each tool:

Now, let’s do some explainin’ about what all the numbers mean. First, CoSchedule: its score runs from 1 to 100, reflecting the headline’s length, whether its wording is associated with more or fewer shares, and several other attributes. Anything over 70 is considered very good, and if you can clear 80 I’d say you’ve found a seriously strong headline. The letter grade after the number refers to “Word Balance,” which the tool describes as “an analysis of the overall structure, grammar, and readability of your headline.”

The Advanced Marketing Institute’s tool works differently. It rates headlines based on which industry the headline belongs to. Then it sorts headline types by whether they’re Intellectual, Empathetic or Spiritual.

Of the two tools, I prefer CoSchedule’s. Just don’t take what it tells you as gospel. These are just tools. They are helpful for picking headlines, but they are really just educated guesses. The only way to tell what’s actually going to work is to either go ahead and publish your content, or try to test the headline before you publish.
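To make the idea of an automated headline score concrete, here is a toy scorer. The heuristics and the word list are invented for illustration only; this is not CoSchedule's actual algorithm, which is proprietary and far richer.

```python
# A toy headline scorer, loosely inspired by tools like CoSchedule's
# analyzer. Weights and the power-word list are made up for this sketch.
import re

POWER_WORDS = {"easy", "secrets", "irresistible", "killer", "proven"}

def score_headline(headline: str) -> int:
    words = re.findall(r"[A-Za-z']+", headline.lower())
    score = 50
    # Headlines of roughly 6-12 words tend to be rated best.
    if 6 <= len(words) <= 12:
        score += 20
    # A digit ("10 Tricks...") is the classic listicle signal.
    if any(ch.isdigit() for ch in headline):
        score += 15
    # Emotional/power words nudge the score up.
    score += 5 * sum(1 for w in words if w in POWER_WORDS)
    return min(score, 100)

print(score_headline("10 Easy Ways to Write Irresistible Headlines"))  # 95
print(score_headline("How to Write Better Headlines"))                 # 50
```

Notice how the toy model already separates headline 21 from headline 1 in the list above; real analyzers apply the same kind of feature scoring with data-driven weights.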

2. Test.

Another thing the old-time copywriters knew: If you can test only one thing, test the headline. This example from Upworthy demonstrates the potentially epic power of a headline test:

Who else wants 59 times more shares from their content?

If you’re willing to test your headlines after a post has been published, here are several WordPress plugins that make it pretty easy to do:

Of course, none of those will help you test before you publish. Which means all the promotion you do in the first days after publication will be using an untested headline. This is no good, because – as you know – the bulk of the attention your post will get is in those first few days.

Drat. Now what?

I might have a solution. I’ve been playing around with pre-testing headlines in Facebook. It’s a flawed system, but here’s how it works:

  • Find an existing blog post that’s closely related to the topic of your new post.

This will be the link you’ll use in your Facebook ad. Ideally, you’d be pointing traffic to a page on your site. But if there isn’t a similar blog post, point it to another site in your niche. You want something close enough that the Facebook ad reviewers won’t disapprove your ad because you’re sending traffic to an unrelated page.

  • Make a “Clicks to Website” type of ad. Have one version of the ad use “Headline A” that you want to test. Create another duplicate ad for “Headline B”.
  • Select an audience for these ads that closely resembles the audience you want to attract.
  • Start the ads. Watch how they perform over the next few days. Make sure you pick a winner that’s statistically valid. A simple test calculator like Perry Marshall’s split-tester will do.

Here’s what my ads dashboard looked like for a short test I ran last month. These aren’t statistically valid results, but this shows what your tests would look like.

It will probably cost you about $20 to test three headlines against each other. It will also add quite a bit of time to your headline writing, and to your content creation. However – what’s it worth to you to find out which headline gets double or triple the clicks?
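For readers who want to check statistical validity themselves rather than rely on an online split-test calculator, the standard tool is a two-proportion z-test. The sketch below is a minimal version; the impression and click counts are made up for illustration.

```python
# A minimal two-proportion z-test for a headline split test: given
# impressions and clicks for variants A and B, return the two-sided
# p-value for the hypothesis that their click-through rates differ.
from math import sqrt
from statistics import NormalDist

def split_test(clicks_a: int, imps_a: int,
               clicks_b: int, imps_b: int) -> float:
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    # Pool the rates under the null hypothesis (no real difference).
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 40 clicks on 2,000 impressions vs 70 on 2,000: p is below 0.05,
# so headline B's lift is unlikely to be chance.
print(split_test(40, 2000, 70, 2000))
```

A p-value under 0.05 is the conventional bar for calling a winner; with only a handful of clicks per variant, the test will (correctly) refuse to declare one.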

3. Use Numbers.

Most of the time, when you’ve got a number in a headline, it means you’ve got a “list post,” aka a “listicle.” A typical listicle headline would be “10 Ways to Get More Shares.” This article format is used far and wide online. It’s also been dissed as a shallow way to express ideas.

Shallow or not, listicles work. Look at any list of “top articles on this site” and you’ll see at least a few listicles. Often, the entire roster of top articles will be listicles.

Why do they work? Several reasons:

  • Listicles are scannable. Most people online are scanning, not reading.
  • Numbers are specific. People want to know what they’re going to get before they click through to a page.
  • Listicles frame the information well. They make the information seem more manageable or “digestible.”

There are many studies showing that listicles outperform other content formats – and other headline types. Here’s one from Noah Kagan’s site:


Act-On Marketing Blog


A Call For A Shared Connectivity Blueprint For Cities

February 23, 2016   BI News and Info

Unless you frequent the personal hygiene shelves at the supermarket, you’re unlikely to have heard of Procter & Gamble’s Crest Spinbrush. Yet the product is something of a milestone in the development of fast-moving consumer goods.

Battery-powered, the product is advertised to move bristles 20 times faster than a manually-powered brush, but probably its most interesting feature is that, unlike most of the goods developed by P&G over the years, it was the result of collaboration with individual inventors.

Intelligent digital ‘networks of networks’ are fundamentally changing the way commerce can be managed, optimised, shared, and deployed

Suddenly, just after the turn of the millennium, P&G had a change of heart. At that time less than 10 percent of the company’s new products were the result of external collaboration. But few, if any, markets move faster than FMCG (fast-moving consumer goods) and the company was concerned it wasn’t innovating rapidly enough. The management, fearing competitors might launch new products that could disrupt its markets, embarked on a daring experiment. Instead of trying to create everything in-house, as would a vertically integrated company, P&G set a target of increasing the percentage of products delivered in partnership with others to over half. In short, by five times.

New thinking, new products

The results of what amounts to a reversal of a long-held strategy have been spectacular. Within a few years the Crest Spinbrush was followed by Olay Regenerist creams in collaboration with chemical suppliers, by a line of probiotic supplements (with university spinouts), and in a particularly dramatic wrench with the past, by Glad’s Press’n Seal plastic wrap that was developed with competitors such as The Clorox Company.

Dubbed ‘Connect and Develop’, this collaboration – or external partnering – took the consumer giant even further than it had planned. By 2008, more than half of its products were being worked up with the help of what would once have been described as outsiders and collaboration is now a fundamental part of its business.

In one sense, P&G’s turnaround was the result of a certain humility. The company recognised that it didn’t know everything and couldn’t do everything. There were a lot of smart – and perhaps smarter – people out there and the conclusion was it should engage with them. Now that P&G is bringing to market products that were once beyond its areas of expertise, the collaborative network has reduced risk. New products are hitting the market faster, quality has improved, and potential competitors have become partners.

Today P&G has built up a network of outside collaborators that, between them, aim to add $3bn a year to the company’s annual sales growth. In short, even such a cutthroat business as FMCG doesn’t have to be war.

P&G was ahead of its time. Few were comfortable with ‘open-sourced’ strategies, even though advances in networking, cloud computing, social media and mobile technologies made them possible. Between them, these transformation technologies have given companies the opportunity to connect, communicate, and collaborate with important external elements of their value chains in ways that were simply not possible before.

Thus, we’re witnessing the era of electronic trading networks that facilitate much richer collaboration with all stakeholders – customers, suppliers, banks, other trading partners, even rivals. As McKinsey’s David Edelman, Principal at the firm’s Boston office and co-leader of the global digital marketing strategy group, explained: “Those companies that partner effectively and securely can bring innovative products to market more quickly, boost efficiency, improve visibility, increase agility, and reduce risk.”

Barriers to success

Companies face two main barriers though, as SAP explained. One is psychological, the other technological. The psychological barrier comes from the fact that corporate cultures have to be dismantled. People may hesitate to share information and resources outside the company for fear of losing status and control. And some of these concerns are justified.

When the business network extends beyond a company’s four walls, explained SAP, the potential security risks multiply. But solutions are emerging all the time, such as the so-called ‘zero trust’ model; a data-centric approach that would still enable an ecosystem of partners, contractors, suppliers, and customers to connect creatively with each other.

And then there’s the problem of conflicting technologies. Highly customised legacy systems and the wide variety of technology providers, each with their own carefully protected intellectual property, have always made it difficult to share even standard data. But just as companies have lately shown a willingness to forgo customisation and control in exchange for the convenience of ‘software as a service’ and cloud technologies, they’ll be more willing to embrace the standardised offerings that will enable increased data and intelligence sharing through business networks.

Nobody’s underrating the importance of cyber security either. By implication, collaborative networks increase the volume of sensitive commercial data that is collected, while procurement decisions can create the risk that vendors will treat sensitive intellectual property with less care than required. But as nations, albeit belatedly, begin to cooperate on the menace of cyber attacks, the risks of such attacks are likely to be reduced.

Collaborative networks – or digital supply chains, if you like – are also based on one obvious fact: you can’t keep banging your head against a wall for too long. Explained Bill He, Vice President of Global Strategic Sourcing for paper giant Kimberly-Clark: “The low-hanging fruit [in supply chains] is gone. You can only reduce procurement costs by 10 percent a year for so long.” And ultimately that tactic will rebound on the procurer, warn management consultants, because suppliers will start cutting corners to maintain their margins.

Worse, it prevents companies and suppliers from establishing a more mutually beneficial relationship. As McKinsey said, in standard procurement deals, one company sends out a request for proposal, gets the proposals, picks a winner, and negotiates a deal. But, explained He, this process only reveals a small fraction of what the purchasing company really needs, and about the same amount of what the supplier could actually provide. Thus, both purchaser and supplier miss out on a lot of knowledge they could both use.

However, the technology must first be up to the task, with networks allowing companies to transact quickly, collaborate in real time, and access information from their network of partners when and where they need it. As this starts to happen, we’re entering an era of ‘knowledge-based sourcing’, a collaborative approach that allows suppliers and customers to share much more information up front to jointly identify opportunities that will deliver benefits for both parties, whether it’s three months from today or five years from now. “Knowledge-based sourcing is the future of the business network”, concluded He.

Part of the series: Our Digital Planet: Data-Driven Business Frameworks Are the Future. In a Hyperconnected World, the Collaborator Is King

Read other articles in this series:

The Democracy of Collaborative Networks

The Rise of the Digital Worker

A Digital First World

See it, Click it, Buy it

A More Intelligent Workplace

Download the full PDF



Digitalist Magazine


“was completed via an internal blockchain, the shared database technology that gained notoriety as…”

November 23, 2015   Big Data

was completed via an internal blockchain, the shared database technology that gained notoriety as the platform for the crypto currency bitcoin. Banks are now racing to harness the power of the blockchain technology, in a belief that it could cut up to $20bn off costs and transform the way the industry works.


Privacy, Big Data, Human Futures by Gerd Leonhard


How To Write an E-Book that Gets Read and Shared

November 3, 2015   InfusionSoft

When did e-books become a content marketing staple? It’s tough to say when, exactly, but they are just that—a staple for your content marketing efforts.

But e-books are also incredibly daunting, especially if you’ve never written one before. Just the sound of the project is intimidating.

There are also other fears—what if you spend your precious time and resources creating this masterpiece and no one reads it and it generates zero leads?

Worry no more. Here’s a step-by-step guide for creating your next e-book and ensuring it gets read and shared:

1. Choose a topic

There are three criteria to keep in mind as you set out to choose a topic for your e-book.

• Relevance: Does it make sense for your business to be publishing something on this topic? Make sure the topic you choose is relevant to your business.

Tip: If you’re writing your first e-book, pick a more general topic to begin with; you can zero in on specifics later. If, for example, you’re a lighting company, write about energy-efficient lighting solutions before you write about the best light bulbs for a fast food restaurant.

• Shelf life: Content marketers love to throw around “evergreen” as an adjective. What does that mean, exactly? Evergreen content—or content with a long shelf life—remains relevant for a long time. If the topic you’re writing about could lose its relevance in the near future, avoid it. Unless you’re pumping out multiple e-books every month, always opt for more evergreen e-book topics.

Tip: Talk with internal subject matter experts to determine the evergreen quality of the topics you’re considering. Perhaps there’s something on the horizon that could dilute a topic and your subject matter expert is privy to that something.

• Industry specificity: Conglomerates aim to be world leaders. You simply want to be an industry leader. How do you do that? Set yourself apart by publishing long form content like e-books that demonstrate your expertise.

Tip: Keeping shelf life in mind, try to identify emerging topics to write about—things that you’re certain will get a lot of airtime in the days ahead. Those things are what people will be clamoring to understand, and your resource could well materialize as the go-to resource on that topic.

2. Source internal content

If the topic you chose is relevant, evergreen and industry-specific, you’ve probably written related content before. Call all of it up and scour it for repurposing. Are there blog posts, other content offers, or sections of other pieces you’ve written that could be repurposed for your e-book?

Grab all of that text, copy and paste it into a master document, and proceed to the next step.

3. Develop a skeleton

Talk with your team about the order of the information you’re presenting. Look for natural progressions. Write out big ideas and order them and identify connections between them. Make sure there’s little overlap between them.

Bullet out each big idea and use numbered lists where they’re fitting. Make sure you can find clear calls to action for the e-book itself and each section individually. 

4. Add flesh to your skeleton

Here’s where you’ll go through that master document and look at your repurposed content. Look at that side-by-side with your skeleton; see if you can plug any of it into your e-book-in-progress.

Start to fill out the skeleton with meaty pieces of text. That could be repurposed stuff or new ideas you conjure as you run through the skeleton.

This step is where your e-book begins to develop a look and a feel. Look closely at each section of the skeleton and add as many strong ideas as you can. Add punchy, weighty sentences that the sections can lean on.

Tip: As you add repurposed content, look for areas to paraphrase to better fit the context. Maybe your e-book has a theme that the text from the original blog post or content offer doesn’t carry. Figure out how to integrate it naturally, so there are no awkward sentences that might detract from the overall readability of your e-book.

5. Write with your head down

Now is when the real flesh-adding happens. Add as much relevant content as you can in this step. Don’t stop to edit. Don’t nitpick. Just go. Dump all of your thoughts here.

Look to elaborate on the points you’ve made thus far. Add illustrations and data as best you can. How can you give readers the best content possible?

6. Edit and polish

“Edit and polish.” Does that sound redundant? It’s not. Here’s what you want to do:

• Edit: Make sure your e-book is error-free and clean. Cut out redundancies and smooth out transitions.

• Polish: Look closely at verb choice—strengthen weak verbs like “get” and “are.” Look for out-of-theme areas and try to integrate e-book-native language as best you can throughout. 

7. Publish and promote

Whew. That was a lot of work. Now you need to focus on getting people to your e-book. How? Here are a few ideas:

• Create a compelling cover: If you don’t want to use a designer, use a site like canva.com to create an aesthetically-pleasing e-book cover.

• Create a landing page: Use a landing page and a lead-generating form dedicated to your newly created e-book.

• Create and distribute call-to-action (CTA) buttons: Carefully design CTA buttons for your e-book and integrate them into past and future posts.

For more ideas to bring in leads (e-books aren’t the only way), register for a free webinar.


