I’m attempting to interpolate a function f[x,t] on a variable grid. My x-variable is continuous, so there’s no difficulty in fitting a cubic spline to it with InterpolationOrder->3 for f[x,1], but because I can’t violate causality and fit using future data, I need to ensure that InterpolationOrder->0 for f[1,t]. Is it possible to do this in any way using the Interpolation command? Or would the best way be to set up N different x-interpolations at each of my measured t=t_i coordinates and connect them with some messy If[] statement?
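For concreteness, here is the kind of call in question. For data on a regular grid, `ListInterpolation` accepts a per-dimension `InterpolationOrder`, which may avoid the `If[]` construction entirely (a sketch, not tested against the asker's data; `data` and the coordinate ranges are placeholders):

```
(* cubic in x, zeroth order in t *)
if = ListInterpolation[data, {{xmin, xmax}, {tmin, tmax}},
  InterpolationOrder -> {3, 0}]
```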

## Tag Archives: Variable

## Curve fitting with a variable number of Gaussian curves

I have data which looks like this:

And I decided to use three Gaussian curves to fit it.

(This code allows me to manipulate all three curves and put in best guesses for peak positions. Using these guesses, Mathematica will find the set of three Gaussian curves that is the closest match.)

```
\[Lambda]min = 340;
\[Lambda]max = 380;

model = height +
   amp1*Exp[-(x - x01)^2/sigma1^2] +
   amp2*Exp[-(x - x02)^2/sigma2^2] +
   amp3*Exp[-(x - x03)^2/sigma3^2];

findBestFitFromValues[{amp1guess_, x01guess_, sigma1guess_,
    amp2guess_, x02guess_, sigma2guess_,
    amp3guess_, x03guess_, sigma3guess_, heightguess_}] :=
  FindFit[
   rowData (*change this*),
   {model, {sigma1 > 0, sigma2 > 0, sigma3 > 0}},
   {{amp1, amp1guess}, {x01, x01guess}, {sigma1, sigma1guess},
    {amp2, amp2guess}, {x02, x02guess}, {sigma2, sigma2guess},
    {amp3, amp3guess}, {x03, x03guess}, {sigma3, sigma3guess},
    {height, heightguess}},
   x];

With[
 {localModel = model /. {
     amp1 -> amp1Var, amp2 -> amp2Var, amp3 -> amp3Var,
     sigma1 -> sigma1Var, sigma2 -> sigma2Var, sigma3 -> sigma3Var,
     x01 -> x01Var, x02 -> x02Var, x03 -> x03Var,
     height -> heightVar}},
 Manipulate[
  Column[{
    Style["Match to Data", 12, Bold],
    Show[
     rowDataPlot (*change this*),
     Plot[localModel, {x, 1240/\[Lambda]max, 1240/\[Lambda]min},
      PlotRange -> All, PlotStyle -> Black],
     Graphics[{
       Orange, Line[{{x01Var, 0}, {x01Var, 500}}],
       Blue, Line[{{x02Var, 0}, {x02Var, 500}}],
       Red, Line[{{x03Var, 0}, {x03Var, 500}}]}]],
    Style["Chosen Curve", 12, Bold],
    Plot[localModel, {x, 1240/\[Lambda]max, 1240/\[Lambda]min},
     PlotRange -> All, PlotStyle -> Black, ImageSize -> 400]}],
  Delimiter, Style["Peak 1", 12, Bold],
  {{amp1Var, 2000, Style["Amplitude 1", Orange]}, 0, 40000},
  {{x01Var, 1240/\[Lambda]min - (1240/\[Lambda]min - 1240/\[Lambda]max) 1/5,
    Style["center 1", Orange]}, 1.95, 3.6},
  {{sigma1Var, 0.01, Style["sigma 1", Orange]}, 0.01, 0.3},
  Delimiter, Style["Peak 2", 12, Bold],
  {{amp2Var, 1660, Style["Amplitude 2", Blue]}, 0, 15000},
  {{x02Var, 1240/\[Lambda]min - (1240/\[Lambda]min - 1240/\[Lambda]max) 2/5,
    Style["center 2", Blue]}, 1.95, 3.6},
  {{sigma2Var, 0.01, Style["sigma 2", Blue]}, 0.01, 0.3},
  Delimiter, Style["Peak 3", 12, Bold],
  {{amp3Var, 1445, Style["Amplitude 3", Red]}, 0, 10000},
  {{x03Var, 1240/\[Lambda]min - (1240/\[Lambda]min - 1240/\[Lambda]max) 4/5,
    Style["center 3", Red]}, 1.95, 3.6},
  {{sigma3Var, 0.01, Style["sigma 3", Red]}, 0.01, 0.3},
  Delimiter, Style["Height", 12, Bold],
  {{heightVar, 15, Style["Height"]}, 0, 1000},
  Delimiter,
  Control[Button["click find nearest solution",
    vals = {amp1Var, x01Var, sigma1Var, amp2Var, x02Var, sigma2Var,
        amp3Var, x03Var, sigma3Var, heightVar} =
      {amp1, x01, sigma1, amp2, x02, sigma2, amp3, x03, sigma3, height} /.
       findBestFitFromValues[{amp1Var, x01Var, sigma1Var, amp2Var,
         x02Var, sigma2Var, amp3Var, x03Var, sigma3Var, heightVar}]]],
  SaveDefinitions -> True]]
```

If I want to fit other data with four or more Gaussian curves, how can I rewrite my code so that I can specify the number of Gaussian curves, rather than constraining it to three as above?
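One possible direction (a sketch, not from the original post): build the model and parameter list programmatically from a peak count `n`, generating the same symbols `amp1 … ampn`, `x01 … x0n`, `sigma1 … sigman` used above:

```
(* Sketch: an n-Gaussian model assembled programmatically; untested *)
gaussModel[n_] := height + Sum[
    Symbol["amp" <> ToString[i]]*
     Exp[-(x - Symbol["x0" <> ToString[i]])^2/
        Symbol["sigma" <> ToString[i]]^2], {i, n}];

gaussParams[n_] := Append[
   Flatten[Table[{Symbol["amp" <> ToString[i]],
      Symbol["x0" <> ToString[i]],
      Symbol["sigma" <> ToString[i]]}, {i, n}]], height];
```

The same scheme extends to the guess lists and the positivity constraints passed to `FindFit`, so the number of Gaussians becomes a single argument instead of hard-coded boilerplate.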

## Confusing behavior when passing a variable vs. inlining a function call

Newbie question here. Consider the following two functions:

```
f1[n_] := (
q = {0, 0};
Do[q[[RandomInteger[{1, 2}]]] += 1, n];
Return[q]
)
f2[n_] := (
q = {0, 0};
Do[k = RandomInteger[{1, 2}]; q[[k]] += 1, n];
Return[q]
)
```

Both seem to be doing the same thing: create a list of zeros, increment a random element `n` times, and return the list. The difference is that the first version “inlines” the `RandomInteger` call into the indexing, while the second defines an intermediate variable `k`.

The function `f2` works as expected, while `f1` does not. For example, `f1` sometimes returns lists whose elements do not sum to the input `n`, which seems very strange.

```
In[357]:= f1[10]

Out[357]= {6, 7}
```

Can someone point out why `f1` and `f2` are treated differently?

## 2 Answers

*Mathematica* is an expression rewriting language. When it evaluates:

```
q[[RandomInteger[{1, 2}]]] += 1
```

It first rewrites it as:

```
q[[RandomInteger[{1, 2}]]] = q[[RandomInteger[{1, 2}]]] + 1
```

The result is that `RandomInteger[{1, 2}]` gets evaluated **twice**, possibly producing two different random integers, so the element that is read is not necessarily the element that is written.
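A minimal fix (sketched below) is therefore to evaluate the random index once and store it, which is exactly what `f2` does; with proper scoping it looks like:

```
f1fixed[n_] := Module[{q = {0, 0}, k},
  Do[k = RandomInteger[{1, 2}]; q[[k]] += 1, n];
  q]
```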

## Using DAX Magic For Variable Forecasting

I often have inputs I know today and want to “run them out” into the future applying certain parameters along the way to create a projection. I’m in finance, so my needs will almost always have a financial planning bias, but there’s no reason why this wouldn’t have general mathematical, statistical, or scientific utility as well. Heck, I’ve gleaned lots from non-financial posts and applied them to my work. Inputs could be virtually anything in this DAX pattern. If you use a bottom-up approach to financial planning and analysis (FP&A), it should be especially easy to relate to this post. To keep things relatively straightforward, I’ll show you how to produce a labor cost projection using a project start date, hourly rate, and employee start and end dates.

Please click here to download the workbook to follow along.

**First, the setup**.

I created three tables in my “data model” (using quotes here because the tables are disconnected). I won’t spend too much time talking about getting data into the data model; however, I have been experimenting and becoming more comfortable with Power Query. Some P3 guru—maybe Ryan Sullivan—recently gave me a pointed piece of advice: “If you can put it in Power Query, do it.” Even though Power Query has been out of my comfort zone, after receiving this advice, I decided it was time to get uncomfortable. Boy, has it paid off!

Here’s a summary of what I did to set up these three tables:

**Calendar:** used List.Dates and set parameters to create a one-year table. In my work, which involves lots of projects, I use an approach I learned from Ken Puls’ blog to create a dynamic parameter so I can change the start/end dates of the projects and Power Query makes a table between those exact dates. It’s awesome.

**Project Start Date:** Excel table sucked into the data model via Power Query.

**Data Input:** Excel table sucked into the data model via Power Query.

So, I lied. I’m breaking the “if you can put it into Power Query, do it” rule in the name of connecting with you more on DAX calculated columns than M, which is beyond where I want to go with this post. In my **calendar table**, I created the following columns:

```
Day Number of Week = WEEKDAY ( [Date] )
Work Day = SWITCH ( [Day Number of Week], 1, 0, 7, 0, 1 )
Month = MONTH ( [Date] )
Year = YEAR ( [Date] )
Year Month = [Year] & "-" & [Month]
Year Month Sort = ( [Year] * 100 ) + [Month]
```

The combination of “Day Number of Week” and “Work Day” gives me similar functionality to NETWORKDAYS in Excel, so I’m able to determine the exact number of work days per month. My company works in the Middle East where weekends fall on Fri/Sat instead of Sat/Sun, which has an impact on the number of workdays. In addition to giving me the NETWORKDAYS functionality, it also gives me the opportunity to move my workable days around to accommodate different schedules.

The rest of the calculated columns are not part of the pattern per se; they’re just part of my “visualization” requirements. I’ll leave you to your own. There is no shortage of information on how to create calendar tables; here is just one article Matt Allington penned that I often reference.

For **project start date**, I just need the date in a measure, so I use:

```
Project Start Date := MAX ( [Value] )
```

Finally, on the **data input** tab, I’ve created a table with a row ID (because every good table has a row ID—you’ll never know when you’ll need it!), employee name, hourly rate, and the start and end dates of the employee’s engagement on this project. I’ve given you a variety of start and end dates (i.e., durations and start/end times within a month) to show you the accuracy of this pattern.

**Here is a look at that data input table:**

**Second, calculating the projection.**

I’ll list the three measures that make this calculation work and then go into detail on the other side:

```
-- 1.
Num of Working Days :=
CALCULATE (
    SUMX ( 'Calendar', 'Calendar'[Work Day] ),
    FILTER (
        'Calendar',
        'Calendar'[Date] >= MAX ( DataInput[Start Date] )
            && 'Calendar'[Date] <= MAX ( DataInput[End Date] )
    )
)

-- 2.
Total Working Days :=
CALCULATE ( SUMX ( DataInput, [Num of Working Days] ) )

-- 3.
Labor Cost :=
CALCULATE (
    SUMX ( DataInput, DataInput[Hourly Cost] * 8 * [Total Working Days] )
)
```

The first measure calculates the number of working days. The second iterates the result of #1 over the DataInput table and is also the measure used in our first report, which shows total days worked.

The third measure calculates the total cost by multiplying the hourly cost by an eight-hour day and by the total working days calculated in #2. “Labor Cost” is the measure used in our second report.

**Where you can go from here…**

I’ve given you the basic pattern; the application of it is limited only by your imagination. I’ve used this to calculate labor cost as above, to apply a revenue calculation (or gross margin %), and within my first post to create a project forecast (i.e., ITD invoiced plus projection vs. project budget). I’ve also used this to take lump sum amounts—say, revenue forecasts from various streams—and project them out over time to better understand how we might collect on them in the future. Finally, I’ve used this as an on-the-fly calculator answering the question: “If I know X, Y, and Z, I wonder how that will look over time?” My answer to that: “Wait a sec.” Usually, it’s a dataset with hundreds of lines of inputs, so it makes any other way of calculating a result quickly pale in comparison.

It takes a special kind of mindset to “bend” data (and software!) to the human will. As this article demonstrates, we at PowerPivotPro can twist Power BI into a pretzel if that’s what an organization needs. (A robust, trustworthy, industrial-strength pretzel of course).

The data-oriented challenges facing your business require BOTH a nimble toolset like Power BI AND a nimble mindset to go with it. And as Val Kilmer’s Doc Holliday once said, we’re your huckleberry.

## How to plot a function with an integrand that has an unknown variable (that unknown variable is the interval of the x-axis in my plot)

For example, I want to plot f(y,c) = f(2,c) vs. c, where the function is the integral of y cos(cx) dx from 0 to pi, but somehow it can’t, and it says:

```
NIntegrate::inumr: The integrand Cos[c x] has evaluated to non-numerical values
for all sampling points in the region with boundaries {{0,6}}. >>
```

How do I plot that? I’m so lost…
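One standard workaround (a sketch, assuming y = 2 as in the question): restrict the numerical integral to numeric arguments with `?NumericQ`, so `Plot` only ever hands `NIntegrate` a concrete value of `c`:

```
f[c_?NumericQ] := NIntegrate[2 Cos[c x], {x, 0, Pi}]
Plot[f[c], {c, 0, 6}]
```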

## Memory optimized table variable and cardinality estimate

In a previous blog, I talked about how a memory optimized table variable consumes memory until the end of the batch. In this blog, I want to make you aware of the cardinality estimate of memory optimized table variables, as we have had customers call in for clarification. By default, a memory optimized table variable behaves the same way as a disk based table variable: it gets an estimate of 1 row. For a disk based table variable, you can control the estimate by using option (recompile) at the statement level (see this blog) or by using trace flag 2453.

You can control the estimate with the same two approaches on a memory optimized table variable if you use it in an ad hoc query or inside a regular TSQL stored procedure; the behavior is the same. This little repro shows that the estimate is correct with option (recompile).

```
create database IMOLTP
go
ALTER DATABASE imoltp ADD FILEGROUP imoltp_mod CONTAINS MEMORY_OPTIMIZED_DATA
go
ALTER DATABASE imoltp ADD FILE (name='imoltp_mod1', filename='c:\sqldata\imoltp_mod2') TO FILEGROUP imoltp_mod
go
use IMOLTP
go
CREATE TYPE dbo.test_memory AS TABLE
(c1 INT NOT NULL INDEX ix_c1,
 c2 CHAR(10))
WITH (MEMORY_OPTIMIZED=ON);
go

DECLARE @tv dbo.test_memory
set nocount on
declare @i int
set @i = 1
while @i < 10000
begin
    insert into @tv values (@i, 'n')
    set @i = @i + 1
end

set statistics xml on
--this will work and the estimate will be correct
select * from @tv t1 join @tv t2 on t1.c1=t2.c1 option (recompile, querytraceon 2453)
set statistics xml off
go
```

But the problem occurs when you use it inside a natively compiled stored procedure. In this case, it will always estimate 1 row. You can’t change that, because natively compiled procedures don’t allow statement level recompiles. If you try to create a natively compiled procedure with these hints, you will get errors that prevent the procedure from being created.

```
create procedure test
with native_compilation, schemabinding
as
begin atomic with
(transaction isolation level = snapshot,
 language = N'English')
DECLARE @tv dbo.test_memory
declare @i int
set @i = 1
while @i < 10000
begin
    insert into @tv values (@i, 'n')
    set @i = @i + 1
end
--you can't add TF 2453 or recompile
select t1.c1, t2.c1 from @tv t1 join @tv t2 on t1.c1=t2.c1 option (recompile, querytraceon 2453)
end
go
```

```
Msg 10794, Level 16, State 45, Procedure test, Line 17 [Batch Start Line 39]
The query hint 'RECOMPILE' is not supported with natively compiled modules.
Msg 10794, Level 16, State 45, Procedure test, Line 17 [Batch Start Line 39]
The query hint 'QUERYTRACEON' is not supported with natively compiled modules.
```

So what is the solution? For natively compiled procedures, here is some advice:

1. Limit the number of rows inserted into the memory optimized table variable.

2. If you are joining with a memory optimized table variable that has lots of rows, consider using a schema_only memory optimized table instead.

3. If you know your data, you can potentially re-arrange the join and use option (force order) to get around it.

Jack Li | Senior Escalation Engineer | Microsoft SQL Server

## Be aware of 701 error if you use memory optimized table variable in a loop

In the blog “Importance of choosing correct bucket count of hash indexes on a memory optimized table”, I talked about encountering performance issues with an incorrectly sized bucket count. I was actually investigating an out of memory issue with the following error.

```
Msg 701, Level 17, State 103, Line 11
There is insufficient system memory in resource pool 'default' to run this query.
```

I simplified the scenario, but the customer’s code is very similar to the loop below. Basically, this customer tried to insert 1 million rows into a memory optimized table variable and process them. Then he deleted the rows from the memory optimized table variable and inserted another 1 million. His goal was to process 1 billion rows, but before he could, he would run out of memory (the 701 error above).

```
DECLARE @t2 AS [SalesOrderDetailType_inmem]
insert into @t2 select * from t2
while 1 = 1  --note that this customer didn't use 1=1. I just simplified to reproduce it
begin
    delete @t2
    insert into @t2 select * from t2
end
```

This customer was puzzled because he deleted the existing rows; at any given time, there should not have been more than 1 million rows, so SQL Server should not have run out of memory.

This is actually by-design behavior, documented in “Memory-Optimized Table Variables”. Here is what it states: “Unlike memory-optimized tables, the memory consumed (including deleted rows) by table variables is freed when the table variable goes out of scope.” With a loop like the above, all deleted rows are kept and consume memory until the end of the loop.
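One way to limit the buildup (a sketch, not from the original post): scope the table variable to a stored procedure, so that it goes out of scope, and its memory is freed, after every call rather than at the end of the whole loop. The procedure name here is hypothetical; the type and source table are from the repro below:

```
-- Sketch: @t2 goes out of scope at the end of each call,
-- so deleted-row memory does not accumulate across iterations
create procedure dbo.process_chunk
as
begin
    DECLARE @t2 AS [SalesOrderDetailType_inmem]
    insert into @t2 select * from t2
    -- ... process the rows in @t2 here ...
end
go

declare @i int = 1
while @i <= 1000
begin
    exec dbo.process_chunk
    set @i = @i + 1
end
```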

### Complete Repro

Step 1: Create a disk based table and populate it with 1 million rows

```
CREATE table t2(
    [OrderQty] [smallint] NOT NULL,
    [ProductID] [int] NOT NULL,
    [SpecialOfferID] [int] NOT NULL,
    [LocalID] [int] NOT NULL
)
```

Step 2: Create a type using a memory optimized table

```
CREATE TYPE [SalesOrderDetailType_inmem] AS TABLE(
    [OrderQty] [smallint] NOT NULL,
    [ProductID] [int] NOT NULL,
    [SpecialOfferID] [int] NOT NULL,
    [LocalID] [int] NOT NULL,
    INDEX [IX_ProductID] HASH ([ProductID]) WITH ( BUCKET_COUNT = 1000000),
    INDEX [IX_SpecialOfferID] NONCLUSTERED (LocalID)
)
WITH ( MEMORY_OPTIMIZED = ON )
```

Step 3: Run the following query; eventually you will run out of memory

```
DECLARE @t2 AS [SalesOrderDetailType_inmem]
insert into @t2 select * from t2
while 1 = 1
begin
    delete @t2
    insert into @t2 select * from t2
end
```

Jack Li | Senior Escalation Engineer | Microsoft SQL Server

## How to involve a variable in numerical integration?

Now I have two functions f[x] and g[x] which both contain another variable a, so we can write them as f[a,x] and g[a,x]. I need to solve the equation `Solve[Integrate[f[a,x],{x,0,2Pi}]==Integrate[g[a,x],{x,0,2Pi}],a]`

for the value of a; however, f and g are both hard to integrate analytically. So now I have to use NIntegrate instead, but NIntegrate does not seem to allow a symbolic variable a inside it.

What can I do to solve for a?
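One standard workaround (a sketch; `f`, `g`, and the starting guess `a0` are placeholders for the actual functions): define numeric-only wrappers around `NIntegrate` and hand the equation to `FindRoot` instead of `Solve`:

```
lhs[a_?NumericQ] := NIntegrate[f[a, x], {x, 0, 2 Pi}]
rhs[a_?NumericQ] := NIntegrate[g[a, x], {x, 0, 2 Pi}]
FindRoot[lhs[a] == rhs[a], {a, a0}]
```

The `?NumericQ` pattern keeps `NIntegrate` from ever being called with a symbolic `a`.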

## DAX now has variable support!

With the latest release of the Power BI Designer, which now supports measure creation, we also snuck in another feature that is very useful in complicated measure scenarios and for performance optimizations: DAX now supports variables. Let’s take a look at what that means.

Here is an example that I use in my Power BI Designer sessions where I calculate the future value of a principal amount, something that is very commonly used in the stock market. Excel has a function for this called FVSCHEDULE. Using our new PRODUCTX function, it is pretty straightforward to implement yourself.

Let’s take this example where I want to see what would happen when we apply a set of compound interest rates to the sales of a promotion. I loaded a simple table that gives me the rates I want to apply, which I subsequently added as a slicer:

Ok now for the calculations:

First I create a measure that compounds the interest rates using PRODUCTX:

Rates calc = PRODUCTX(Rates,1+[Rates])

This calculation returns the product of 1 + [Rates] for each row in the Rates table.

Now when I want to use it I actually create another measure:

Future Investment = IF([Sum of SalesAmount], [Sum of SalesAmount] * [Rates calc])

This calculation will multiply the Sum of SalesAmount by the Rates Calc.

Together giving these results:

OK, now let’s take a look at rewriting this using variables. In general, the syntax for variables is the following:

```
Measure name =
var varname = <DAX formula>
var varname2 = <DAX formula>
return varname + varname2
```

Now, writing the formula we created before using variables, you get the following:

```
Future value variable =
var Ratescalc = PRODUCTX ( Rates, 1 + [Rates] )
var Revenue = [Sum of SalesAmount]
return if ( Revenue, Revenue * Ratescalc )
```

Adding it to the visual:

OK, let’s look at another example. Variables cannot just hold single values; you can also use them to store tables:

```
testvar =
var table1 = FILTER ( Customer, [Sum of SalesAmount] > 20 )
var table2 = FILTER ( Customer, Customer[AgeGroup] = "1 < 25" )
var tableunion = UNION ( table1, table2 )
return COUNTROWS ( tableunion )
```

OK, let’s recap: why do you want to use variables?

- They can improve readability of your measures
- They can improve performance as measure values get stored into a variable and can be reused in other places without having to calculate the value several times.

I can write this YoY% measure:

```
SalesAmount PreviousYear = CALCULATE ( [Sum of SalesAmount], SAMEPERIODLASTYEAR ( Calendar[Date] ) )

Sum of SalesAmount YoY% :=
if ( [Sum of SalesAmount],
     DIVIDE ( [Sum of SalesAmount] - [SalesAmount PreviousYear], [Sum of SalesAmount] ) )
```

using variables into:

```
YoY% =
var Sales = [Sum of SalesAmount]
var SalesLastYear = CALCULATE ( [Sum of SalesAmount], SAMEPERIODLASTYEAR ( 'Calendar'[Date] ) )
return if ( Sales, DIVIDE ( Sales - SalesLastYear, Sales ) )
```

First of all, it is more readable (of course, this is a matter of opinion :)), but second of all, the [Sum of SalesAmount] measure is calculated 4 times if you also count the previous year measure. In the variable case, [Sum of SalesAmount] is only executed twice. In this example it doesn’t really make a big difference, but as your measures get more and more complicated, this can really make a difference.

OK, enough for now. Go download the Power BI Designer!


Kasper de Jonge PowerPivot and Power BI Blog

## New Twist for Dynamic Segmentation: Variable Grain Control

By **Avichal Singh** [**Twitter**]

Dynamic segmentation or banding has been covered in PowerPivotPro **articles** in the past, and in **beautiful detail** by the Italians, Marco Russo and Alberto Ferrari (these folks are literally “**off the charts!**” in Matt’s representation of Power Pivot skill levels).

It involves grouping your data in specific numeric bands or segments; for example, looking at your sales data based on the price range of your products. You have a long list of price points for your products; instead of looking at each price point individually, it would be nice to group them into segments, say to represent the low, medium, and high price items in your catalog.

**Hundreds of products at different list prices… => Grouped based on their List Price Range**

### Variable Range Selection

That is great; however, it is hard to predefine segments that would work well in all scenarios. As your data changes over time, or as users slice and dice your existing data (e.g., filter to a specific region, product category, or year), the segments may prove to be either too granular or not granular enough. In the case below, the predefined range does not have enough grain or detail, and pretty much everything ends up in one bucket ($3000-$4000).

**Predefined Segment Ranges may prove too granular or not granular enough as you work your data**

What if your segments had a range of options from high to low granularity, so that you could choose the right segments based on the data and your need!

**Range can be chosen to show 1000s, 100s or 10s based on dataset**

**Download File**

Watch on YouTube or keep reading…

### Revisit Dynamic Segmentation

Let’s first quickly review dynamic segmentation using static predefined ranges. The way this works is using a disconnected table to define the segment ranges and then using them in a measure that calculates your numerical value.

**Static Price Range Table**

```
Sales Amount by Static Price Range :=
CALCULATE (
    [Total Sales Amount],
    FILTER (
        Products,
        Products[ListPrice] >= MIN ( StaticPriceRange[Min Price] )
            && Products[ListPrice] <= MAX ( StaticPriceRange[Max Price] )
    )
)
```

This would generate output similar to shown above.

However, once defined (at whichever grain you chose: 1000s, 100s, 10s, or something else), you do not have the option to choose a different grain if needed. So let’s see how we can define segments of variable grain, so we can use the grain that makes the best sense.

### Variable Range Selection

The trick is somewhat simple. You construct your segment range table slightly differently, as shown below, by defining multiple segment ranges of variable grain.

**New Price Range Table with Selector Columns of Variable Grains**

And define your measure slightly differently

```
Sales Amount by Price Range :=
CALCULATE (
    [Total Sales Amount],
    FILTER (
        Products,
        Products[ListPrice] >= MIN ( PriceRange[RangeId] )
            && Products[ListPrice] <= MAX ( PriceRange[RangeId] )
    )
)
```

That’s it! Now you can choose the segment range that makes best sense with the data that you are viewing.

### Building Range Selection Tables

To keep things simple, we chose each row of our price range table to be $1 (indicated by the **RangeID** column in the image **above**). Thus we have 4000 rows to represent the range from 0 to 4000. That’s probably too many already; imagine if our range needed to be from 0 to, say, 1,000,000! Would we need a million rows?

Turns out, you do not need a row for each unit. You only need to define the range table, based on the **lowest granular range** that you want to see.

For example, if **10s** is the lowest grain we want, we can define our range table as below. You can see this one only has 400 rows to cover the numerical range of 0 to 4000 (**10** x 400 = 4000). If you chose 100s to be your lowest grain, you would only need 40 rows.

**New Range Selector Table fits in only 400 rows, governed by the lowest chosen grain**

The measure gets slightly redefined based on our new range table

```
Sales Amount by Price Range2 :=
CALCULATE (
    [Total Sales Amount],
    FILTER (
        Products,
        Products[ListPrice] >= MIN ( PriceRange2[Min Value] )
            && Products[ListPrice] <= MAX ( PriceRange2[Max Value] )
    )
)
```

### Improving Graphical Display

As is typical, rows where the measure returns blank are not shown in the pivot or graph. While this may work for a pivot, it looks downright weird on a graph; see the image below.

**Axis looks weird with no continuity**

To address this, we define a new incremental measure (based on our previous measure; you are following **best practice #4**, aren’t you?)

```
Sales Amount by Price Range_Show Zeroes :=
IF (
    ISBLANK ( [Sales Amount by Price Range] ),
    0,
    [Sales Amount by Price Range]
)
```

And the graph looks much prettier now with a continuous axis. This is much less jarring to users as they drill down, up and across in the data set.

**Continuous Axis makes a lot more sense**

Power on, my friend! **Download File** (Excel 2013)
