Using Materialized Views with Big Data SQL to Accelerate Performance

One of Big Data SQL’s key benefits is that it leverages the performance capabilities of Oracle Database 12c.  I thought it would be interesting to illustrate an example – in this case, a performance optimization that has been around for quite a while and is used by thousands of customers:  Materialized Views (MVs).

For those of you who are unfamiliar with MVs – an MV is a precomputed summary table.  There is a defining query that describes that summary.  Queries that are executed against the detail tables comprising the summary will be automatically rewritten to the MV when appropriate.

In the diagram above, we have a 1B row fact table stored in HDFS that is being accessed through a Big Data SQL table called STORE_SALES.  Because we know that users want to query the data using a product hierarchy (by Item), a geography hierarchy (by Region) and a mix (by Class & QTR) – we created three summary tables that are aggregated to the appropriate levels.  For example, the “by Item” MV has the following defining query:

CREATE MATERIALIZED VIEW mv_store_sales_item
ENABLE QUERY REWRITE
AS
  select ss_item_sk,
         sum(ss_quantity) as ss_quantity,
         sum(ss_ext_wholesale_cost) as ss_ext_wholesale_cost,
         sum(ss_net_paid) as ss_net_paid,
         sum(ss_net_profit) as ss_net_profit
  from bds.store_sales
  group by ss_item_sk;

Queries executed against the large STORE_SALES that can be satisfied by the MV will now be automatically rewritten:

SELECT i_category,
       sum(ss_net_paid) as net_paid
FROM bds.store_sales, bds.item_orcl
WHERE ss_item_sk = i_item_sk
  AND i_size in ('small', 'petite')
  AND i_wholesale_cost > 80
GROUP BY i_category;

Taking a look at the query’s explain plan, you can see that even though store_sales is the table being queried – the table that satisfied the query is actually the MV called mv_store_sales_item.  The query was automatically rewritten by the optimizer.
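If you want to confirm rewrite behavior without reading the plan, Oracle’s DBMS_MVIEW.EXPLAIN_REWRITE procedure reports whether (and why) a given query can be rewritten to a given MV.  A sketch – it assumes the REWRITE_TABLE output table has already been created with the standard utlxrw.sql script, and the query text is abbreviated here:

```sql
-- Run $ORACLE_HOME/rdbms/admin/utlxrw.sql once to create REWRITE_TABLE
BEGIN
  DBMS_MVIEW.EXPLAIN_REWRITE(
    query        => 'SELECT ... FROM bds.store_sales, bds.item_orcl ... GROUP BY i_category',
    mv           => 'MV_STORE_SALES_ITEM',
    statement_id => 'rw1');
END;
/

-- Messages explain whether the rewrite happened and, if not, why
SELECT message FROM rewrite_table WHERE statement_id = 'rw1';
```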

Explain plan with the MV:

Explain plan without the MV:

Even though Big Data SQL optimized the join and pushed the predicates and filtering down to the Hadoop nodes – the MV dramatically improved query performance:

  • With MV:  0.27s
  • Without MV:  19s

This is to be expected, as we’re querying a significantly smaller, partially aggregated data set.  What’s nice is that the query did not need to change; simply introducing the MV sped up the processing.

What is interesting here is that the query selected data at the Category level – yet the MV is defined at the Item level.  How did the optimizer know that there was a product hierarchy?  And that Category level data could be computed from Item level data?  The answer is metadata.  A dimension object was created that defined the relationship between the columns:

      CREATE DIMENSION bds.product_dim
        LEVEL item     IS (item_orcl.i_item_sk)
        LEVEL class    IS (item_orcl.i_class)
        LEVEL category IS (item_orcl.i_category)
        HIERARCHY prod_rollup (
          item CHILD OF
          class CHILD OF
          CATEGORY );

Here, you can see that Items roll up into Class, and Classes roll up into Category.  The optimizer used this information to allow the query to be redirected to the Item level MV.

A good practice is to compute these summaries and store them in Oracle Database tables.  However, there are alternatives.  For example, you may have already computed summary tables and stored them in HDFS.  You can leverage these summaries by creating an MV over a pre-built Big Data SQL table.  Consider the following example where a summary table was defined in Hive and called csv.mv_store_sales_qtr_class.  There are two steps required to leverage this summary:

  1. Create a Big Data SQL table over the hive source
  2. Create an MV over the prebuilt Big Data SQL table

Let’s look at the details.  First, create the Big Data SQL table over the Hive source (and don’t forget to gather statistics!):

CREATE TABLE bds.mv_store_sales_qtr_class (
      D_QUARTER_NAME VARCHAR2(30),
      I_CLASS VARCHAR2(100),
      SS_QUANTITY NUMBER,
      SS_WHOLESALE_COST NUMBER,
      SS_EXT_DISCOUNT_AMT NUMBER,
      SS_EXT_TAX NUMBER,
      SS_COUPON_AMT NUMBER
   )
   ORGANIZATION EXTERNAL (
      TYPE ORACLE_HIVE
      DEFAULT DIRECTORY DEFAULT_DIR
      ACCESS PARAMETERS (
        com.oracle.bigdata.tableName: csv.mv_store_sales_qtr_class
      )
   )
   REJECT LIMIT UNLIMITED;

-- Gather statistics
exec DBMS_STATS.GATHER_TABLE_STATS ( ownname => '"BDS"', tabname => '"MV_STORE_SALES_QTR_CLASS"', estimate_percent => dbms_stats.auto_sample_size, degree => 32 );

Next, create the MV over the Big Data SQL table:

CREATE MATERIALIZED VIEW mv_store_sales_qtr_class
ON PREBUILT TABLE
ENABLE QUERY REWRITE
AS
  select d.d_quarter_name,
         i.i_class,
         sum(s.ss_quantity) as ss_quantity,
         sum(s.ss_wholesale_cost) as ss_wholesale_cost,
         sum(s.ss_ext_discount_amt) as ss_ext_discount_amt,
         sum(s.ss_ext_tax) as ss_ext_tax,
         sum(s.ss_coupon_amt) as ss_coupon_amt
    from bds.store_sales s, bds.item_orcl i, bds.date_dim_orcl d
    where s.ss_item_sk = i.i_item_sk
      and s.ss_sold_date_sk = d.d_date_sk
    group by d.d_quarter_name,
             i.i_class;

Queries against STORE_SALES that can be satisfied by the MV will be rewritten:

Here, the following query used the MV:
- What is the quarterly performance by category with yearly totals?

select i.i_category,
       d.d_year,
       d.d_quarter_name,
       sum(s.ss_quantity) quantity
from bds.DATE_DIM_ORCL d, bds.ITEM_ORCL i, bds.STORE_SALES s
where s.ss_item_sk = i.i_item_sk
  and s.ss_sold_date_sk = d.d_date_sk
  and d.d_quarter_name in ('2005Q1', '2005Q2', '2005Q3', '2005Q4')
group by rollup (i.i_category, d.d_year, d.d_quarter_name);

And, the query returned in a little more than a second:

Looking at the explain plan, you can see that the query is executed against the MV – and the EXTERNAL TABLE ACCESS (STORAGE FULL) indicates that Big Data SQL Smart Scan kicked in on the Hadoop cluster.

MVs within the database can be automatically updated by using change tracking.  However, in the case of Big Data SQL tables, the data is not resident in the database – so the database does not know when the summaries change.  Your ETL processing will need to ensure that the MVs are kept up to date – and you will need to set query_rewrite_integrity=stale_tolerated.
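For example, rewrite against MVs whose freshness the database cannot verify can be enabled at the session or instance level (a sketch – pick the scope appropriate for your environment):

```sql
-- Allow the optimizer to rewrite queries against MVs even when it
-- cannot verify that they are fresh (the data lives in Hadoop):
ALTER SESSION SET query_rewrite_integrity = stale_tolerated;

-- Or instance-wide:
ALTER SYSTEM SET query_rewrite_integrity = stale_tolerated;
```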

MVs are an old friend.  They have been used for years to accelerate performance for traditional database deployments.  They are a great tool to use for your big data deployments as well!


Oracle Blogs | Oracle The Data Warehouse Insider Blog

Better Machine Learning Models with Multi-Objective Feature Selection: Part 1

The Basics of Feature Selection

Feature selection can greatly improve your machine learning models. In this blog series, I’ll outline all you need to know about feature selection. In Part 1 below I discuss why feature selection is important, and why it’s in fact a very hard problem to solve. I’ll detail some of the different approaches which are used to solve feature selection today.

Why should we care about Feature Selection?

There is a consensus that feature engineering often has a bigger impact on the quality of a model than the model type or its parameters. Feature selection is a key part of feature engineering. And given that kernel functions and hidden layers perform implicit feature space transformations, is feature selection still relevant in the age of support vector machines (SVMs) and deep learning? Yes, absolutely.

First, we can fool even the most complex model types. If we provide enough noise to overshadow the true patterns, it will be hard to find them. In those cases the model starts to use the noise patterns of the unnecessary features, which means it does not perform well. It might even perform worse if it starts to overfit to those patterns and fails on new data points. The more data dimensions there are, the easier it is to fall into this trap. No model type is better than others in this regard: decision trees can fall into it just as easily as multi-layer neural networks. Removing noisy features can help the model focus on relevant patterns.

But there are other advantages of feature selection. If we reduce the number of features, models generally train much faster. And often the resulting model is simpler and easier to understand. We should always try to make the work easier for our model. If we focus on the features that carry the signal over those that are noise, we end up with a more robust model.

Why is this a hard problem?

Let’s begin with an example. Let’s say we have a data set with 10 attributes (features, variables, columns) and one label (target, class). The label column is the one we want to predict. We’ve trained a model on this data and determined that its accuracy is 62%. Can we identify a subset of those 10 attributes for which a trained model would be more accurate?

We can depict any subset of our 10 attributes as a bit vector, i.e. a vector of 10 binary digits. A 0 means that the specific attribute is not used, and a 1 depicts an attribute which is used for this subset. If we want to indicate that we use all 10 attributes, we would use the vector (1 1 1 1 1 1 1 1 1 1). Feature selection is the search for the bit vector that produces the optimal accuracy. One possible approach would be to try out all the possible combinations. Let’s start with using only a single attribute. The first bit vector looks like this:

As we can see, when we use the first attribute we come up with an accuracy of 68%. That’s already better than our accuracy with all attributes, 62%.  But can we improve this even more? Let’s try using only the second attribute:

Still better than using all 10 attributes, but not as good as only using the first.

We could continue to go through all possible subsets of size 1. But why should we stop there?  We can also try out subsets of 2 attributes now:

Using the first two attributes immediately looks promising with 70% accuracy. We can collect all accuracies of these subsets until we have tried all of the possible combinations:

We call this a brute force approach.

How many combinations did we try for 10 attributes? We have two options for each attribute: we can decide to either use it or not.  And we can make this decision for all 10 attributes, which results in 2 x 2 x 2 x … = 2^10 = 1,024 different outcomes. One of those combinations does not make any sense though, namely the one which does not use any features at all. So, this means that we only need to try 2^10 – 1 = 1,023 subsets. Even for a small data set, we can see there are a lot of attribute subsets. It is also helpful to keep in mind that we need to perform a model validation for every single one of those combinations. If we use a 10-fold cross-validation, we need to train 10,230 models. That is still doable for fast model types on fast machines.
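The brute force search above can be sketched in a few lines of Python. The `toy_accuracy` function below is a made-up stand-in for real model training and validation (in practice `evaluate` would run a cross-validated model fit):

```python
from itertools import combinations

def brute_force_selection(n_attributes, evaluate):
    """Evaluate every non-empty attribute subset and return the best one."""
    best_subset, best_score = None, float("-inf")
    n_tried = 0
    for k in range(1, n_attributes + 1):
        for subset in combinations(range(n_attributes), k):
            n_tried += 1
            score = evaluate(subset)
            if score > best_score:
                best_subset, best_score = subset, score
    return best_subset, best_score, n_tried

# Toy stand-in for model validation: attributes 0 and 1 carry the signal,
# and every attribute in the subset adds a small noise penalty.
def toy_accuracy(subset):
    return 0.62 + 0.1 * sum(1 for a in subset if a in (0, 1)) - 0.01 * len(subset)

best, score, tried = brute_force_selection(10, toy_accuracy)
print(best, round(score, 2), tried)  # tried is 2**10 - 1 = 1023
```

Even with this trivial scoring function, all 1,023 subsets get evaluated — exactly the combinatorial cost discussed above.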

But what about more realistic data sets?  If we have 100 instead of only 10 attributes in our data set, we already have 2^100 – 1 combinations, bringing the number of combinations to 1,267,650,600,228,229,401,496,703,205,375. Even the largest computers can no longer perform this.

Heuristics to the Rescue!

Going through all possible attribute subsets is not a feasible approach, then. We should instead focus only on the combinations which are more likely to lead to more accurate models. We could try to prune the search space and ignore feature sets which are not likely to produce good models. Of course, there is then no longer any guarantee that we will find the optimal solution: if we ignore complete areas of our solution space, we might skip the optimal solution as well. But these heuristics are much faster than our brute force approach, and often we end up with a good, sometimes even optimal, solution in a much faster time. There are two widely used approaches for feature selection heuristics in machine learning: forward selection and backward elimination.

Forward Selection

The heuristic behind forward selection is very simple. We first try out all subsets with only one attribute and keep the best solution. But instead of trying all possible subsets with two features next, we only try specific 2-subsets: those which contain the best attribute from the previous round. If we do not improve, we stop and deliver the best result from before, i.e. the single attribute. But if we have improved the accuracy, we continue: we keep the best attributes so far and try to add one more. We continue this until we no longer improve.

What does this mean for the runtime for our example with 10 attributes from above? We start with the 10 subsets of only one attribute which is 10 model evaluations. We then keep the best performing attribute and try the 9 possible combinations with the other attributes. This is another 9 model evaluations then. We stop if there is no improvement or keep the best 2-subset if we get a better accuracy. We now try the 8 possible 3-subsets and so on. So, instead of going brute force through all 1,023 possible subsets, we only go through 10 + 9 + … + 1 = 55 subsets. And we often will stop much earlier as soon as there is no further improvement.  We see below that this is often the case. This is an impressive reduction in runtime. And the difference becomes even more obvious for a case with 100 attributes. Here we will only try at most 5,050 combinations instead of the 1,267,650,600,228,229,401,496,703,205,375 possible ones.
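The forward selection loop can be sketched like this, again with a toy scoring function standing in for real model validation:

```python
def forward_selection(n_attributes, evaluate):
    """Greedily grow the attribute subset one attribute at a time."""
    selected, best_score = [], float("-inf")
    remaining = list(range(n_attributes))
    evaluations = 0
    while remaining:
        best_addition, best_addition_score = None, best_score
        for a in remaining:
            evaluations += 1
            score = evaluate(tuple(selected + [a]))
            if score > best_addition_score:
                best_addition, best_addition_score = a, score
        if best_addition is None:  # no single addition improves: stop
            break
        selected.append(best_addition)
        remaining.remove(best_addition)
        best_score = best_addition_score
    return tuple(selected), best_score, evaluations

# Toy stand-in: attributes 0 and 1 carry the signal, the rest is noise.
def toy_accuracy(subset):
    return 0.62 + 0.1 * sum(1 for a in subset if a in (0, 1)) - 0.01 * len(subset)

subset, score, evaluations = forward_selection(10, toy_accuracy)
print(subset, round(score, 2), evaluations)  # far fewer than 1,023 evaluations
```

Here the search stops after three rounds (10 + 9 + 8 = 27 evaluations) because no third attribute improves the score.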

Backward Elimination

Things are similar with backward elimination; we just turn the direction around. We begin with the subset consisting of all attributes. Then we try to leave out one single attribute at a time. If we improve, we keep going, permanently leaving out the attribute whose removal led to the biggest improvement in accuracy. We then go through all possible combinations by leaving out one more attribute, in addition to the ones we already left out. We continue doing this until we no longer improve. Again, for 10 attributes this means that we will evaluate at most 1 + 10 + 9 + 8 + … + 2 = 55 combinations.
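Backward elimination is the mirror image of the forward loop — the same greedy structure, but shrinking the set instead of growing it (again with a toy scoring function in place of real model validation):

```python
def backward_elimination(n_attributes, evaluate):
    """Greedily shrink the attribute subset one attribute at a time."""
    selected = list(range(n_attributes))
    best_score = evaluate(tuple(selected))
    while len(selected) > 1:
        best_removal, best_removal_score = None, best_score
        for a in selected:
            score = evaluate(tuple(x for x in selected if x != a))
            if score > best_removal_score:
                best_removal, best_removal_score = a, score
        if best_removal is None:  # no removal improves: stop
            break
        selected.remove(best_removal)
        best_score = best_removal_score
    return tuple(selected), best_score

# Toy stand-in: attributes 0 and 1 carry the signal, the rest is noise.
def toy_accuracy(subset):
    return 0.62 + 0.1 * sum(1 for a in subset if a in (0, 1)) - 0.01 * len(subset)

subset, score = backward_elimination(10, toy_accuracy)
print(subset, round(score, 2))
```

On this toy landscape the noise attributes are dropped one by one until only the two signal attributes remain.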

Are we done?  It looks like we found some heuristics which work much faster than the brute force approach. And in certain cases, these approaches will deliver a very good attribute subset. The problem is that in most cases, they unfortunately will not. For most data sets, the model accuracies form a so-called multi-modal fitness landscape. This means that besides one global optimum there are several local optima. Both methods will start somewhere on this fitness landscape and will move from there. In the image below, we have marked such a starting point with a red dot.  From there, we continue to add (or remove) attributes if the fitness improves. They will always climb up the nearest hill in the multi-modal fitness landscape. And if this hill is a local optimum they will get stuck in there since there is no further climbing possible. Hence, those algorithms do not even bother with looking out for higher hills. They take whatever they can easily get. Which is exactly why we call those “greedy” algorithms. And when they stop improving, there is only a very small likelihood that they made it on top of the highest hill. It is much more likely that they missed the global optimum we are looking for. Which means that the delivered feature subset is often a sub-optimal result.

Slow vs. Bad.  Anything better out there?

This is not good then, is it? We have one technique which would deliver the optimal result, but is computationally not feasible.  This is the brute force approach. But as we have seen, we cannot use it at all on realistic data sets.  And we have two heuristics, forward selection and backward elimination, which deliver results much quicker. But unfortunately, they will run into the first local optimum they find. And that means that they most likely will not deliver the optimal result.

Don’t give up though – in our next post we will discuss another heuristic which is still feasible even for larger data sets. And it often delivers much better results than forward selection and backward elimination. This heuristic is making use of evolutionary algorithms.



Creativity, Collaboration and Technology: Lessons from Joseph Gordon-Levitt’s HitRecord

Posted by Barney Beal, Content Director

500 Days of Summer. Inception. This is the End. The Dark Knight Rises. Don Jon. What do all of these films have in common? Joseph Gordon-Levitt. From child star to leading actor to writer and director, Gordon-Levitt has built an impressive Hollywood resumé. What most people don’t realize is that he is also an entrepreneur.

His endeavor, HitRecord, started as a site for artists and musicians to post their art, compositions, raps and more. It was meant to be a site to inspire, to share, to find commonness, and to critique. But as the site evolved, something happened. These artists and musicians actually started working together to make more art. Levitt and his brother Daniel, who helped to found HitRecord, realized that they were on to something.

The site was sparking these individuals, many of whom had unique skill sets, to collaborate to create animations, short films, soundtracks, short stories – the list goes on. HitRecord evolved into a platform that fostered creativity and collaboration. It is now an online community that works together as a production company.

Speaking at SuiteConnect at Oracle OpenWorld 2017, Levitt detailed the phenomenon of HitRecord, how it has changed how artists and musicians collaborate, and how it is making a real impact in the lives of the people that create the content (HitRecord has paid over $2.5 million to its content producers).


Levitt shared this story to provide some advice for the crowd. Technology companies should encourage their people to be creative, together. Here are a few key takeaways from Levitt’s talk:

See things through to the end. You might start on one path with a company or product, and end up in a completely different place than you intended. Just because the end result is not what you envisioned doesn’t mean it isn’t successful.

Creativity is invaluable. Perspective, skill and imagination are all unique to the individual. Cherish each person’s creativity, and figure out how to maximize that to drive business value.

One is less than many. One person can write an incredible script. But having two or even three writers work on it can take the script to another level.

Collaboration is key. Enabling people and teams in different functions to work together – whether through technology, communication platforms, or in-person office configurations – is essential to ensuring the best possible outcome for a project.

Be open-minded to creative ideas. No great idea starts as a great idea. Evaluate the plan, bring in second opinions, and work closely with the creative team to get a full picture… visualize the potential.

Check out what is happening on HitRecord, and catch up on the SuiteConnect at OOW17 keynote.

Posted on Wed, November 22, 2017
by NetSuite filed under


The NetSuite Blog

12 Websites & Blogs Every Data Analyst Should Follow

While demand for data analysts is at an all-time high, the online community still leaves something to be desired. It can be difficult to find good, unbiased online resources and websites dedicated to data professionals. We’ve asked our own data analysts to tell us about some of their favorite sites and created this list of must-follow forums, data analytics blogs, and resource centers. We’re sure there are many additional great ones out there, so if you know of any, please tell us in the comments!

The list is organized in alphabetical order.

Cross Validated (Stack Exchange)


Part of the Stack Exchange network of Q&A communities for developers, Cross Validated is a Q&A site for statistics, machine learning, data analysis, data mining, and visualization. A great place if you’re stuck with a professional question and need answers from fellow professionals.

Data Science and Beyond (Yanir Seroussi)


Mr. Seroussi is an independent data scientist and computer programmer who posts about solving practical problems in data science (such as migrating a web app from MongoDB to Elasticsearch). The blog is fluently written and highly detailed, complete with relevant code samples.

Data Science Central


This website by Vincent Granville offers both a social-community experience as well as a content repository with an endless flow of new articles posted on topics such as data plumbing, Hadoop, data visualization and more.



DBMS2

A blog of sorts, written by Curt Monash of Monash Research and covering database management, data analytics, and related technologies. Offers well-written, comprehensive and vendor-neutral analysis from a technical and business perspective.


DZone

DZone is an online community that publishes resources for software developers, covering topics from big data and AI to data science and analytics. Their material is sourced from community members as well as influencers within the tech space.

Edwin Chen’s Blog


While this blog is not updated very frequently, every post is a fascinating example of practical data analysis, often applied to a real-life use case, along with many clear and intuitive explanations of complex concepts in data science and machine learning.


KDnuggets

KDnuggets is one of the leading big data, data science, and machine learning sites. Content comes from contributors and is edited by Gregory Piatetsky-Shapiro and Matthew Mayo, ranging from tutorials to opinion pieces and everything in between.

KPI Library


While this website requires registration, it’s absolutely free to do so and once you’re in you have access to literally thousands of examples and suggestions for key performance indicators across dozens of industries, frameworks and business processes.

Simply Statistics


A site maintained by three professors of biostatistics, featuring a variety of articles and additional media on statistical techniques and deep data analysis. There are practical examples as well as theoretical material and the site is updated fairly regularly.

Statistical Inference, Causal Inference, and Social Science


The articles here are contributed by six different writers, each writing from their own practical experience in modeling and analyzing data and covering a wide range of categories and topics.


R-bloggers

A content hub that aggregates the RSS feeds of bloggers who write about the popular open-source R language; a great place to keep your R knowledge up to date and see what’s new in the R community.

What’s The Big Data?


Gil Press is a thought leader in the Big Data sphere and has contributed in developing some of the milestones in estimating the size and growth of digital data. His personal website and Forbes column are a great source for news and commentary on Big Data, data science, IoT and related topics.



Blog – Sisense

Marble In Spray Paint Can


Dangerous.  Do not try this.  Watch from the safety of YouTube.

Marbles from Empty Spray Cans

August 25, 2014




Use PowerShell: build a redist folder to install Dynamics CRM 2016 without an internet connection

We’ve had several customers ask about an updated PowerShell script that would download the Dynamics CRM 2016 prerequisites and put them into a folder for installations without internet access.  Since the previous two versions, there’s been a change to the files required in 2016, so with a few tweaks and a little testing, we have a script to paste into PowerShell and create our own CRM 2016 Redist folder with all the prerequisite files.

Usually you find PowerShell scripts downloadable as PS1 files; for the sake of safety, I’ve provided this one as text (as well as a .txt download) so you can review the script before running it in PowerShell.  With that in mind, the script can simply be copied and pasted into PowerShell so you can quickly build your redist folder.

Instructions for use:

  1. Open PowerShell on the computer you have internet access on
  2. Copy the script below top to bottom (from the “#begin script” to the “#end script”) – NOTE you may need to edit your language code in the script if you are not installing the EN-US language
  3. Paste it right into PowerShell – if it doesn’t execute hit enter to run the “Create-CRM2016Redist” function
  4. This will pop up a folder picker, pick a folder and press OK
  5. After you press OK, the script will create a new Redist folder in the destination you’ve selected.  It will then create the directory structure (11 folders) and download 24 files, totaling about 216MB of disk space when it’s all done.
  6. Finally, once it has completed, copy the redist folder to the install folder containing: Server, Client, EmailRouter, and BIDSExtensions folders
  7. When you’re done copying your install folder should look like the graphic below:

Download the PowerShell Script as a .txt file.

#begin Script 
#Function to Show an Open Folder Dialog and return the directory selected by the user. 
function Read-FolderBrowserDialog([string]$Message, [string]$InitialDirectory) 
{
    $app = New-Object -ComObject Shell.Application 
    $folder = $app.BrowseForFolder(0, $Message, 0, $InitialDirectory) 
    if ($folder) { return $folder.Self.Path } else { return '' } 
}

#download pre-req function, also creates the folders 
function dlPreReq($root, $folderName, $fileName, $url)
{
  $fldr = Join-Path -Path $root -ChildPath $folderName
  $dest = Join-Path -Path $fldr -ChildPath $fileName
  #create folder if it doesn't exist 
  if((Test-Path -Path $fldr) -ne $True)
  {
    New-Item -Path $fldr -ItemType directory | out-null
  }
  Write-Host ("Downloading {0} to path: {1} " -f $fileName, $fldr)
  $wc = New-Object System.Net.WebClient
  $wc.downloadFile($url, $dest)
}

#download each pre-req 
function Create-CRM2016Redist()
{
  $linkRoot = ""
  $langCode = "ENU" 
  $LHex = 0x409 #must match above langCode
  $folderRoot = (Read-FolderBrowserDialog "Pick the location to create the Dynamics CRM 2016 redist folder") #folder root
  if(($folderRoot.length) -gt 0)
  {
    $fr = Join-Path -Path $folderRoot -ChildPath "Redist"
    dlPreReq $fr dotNETFX "NDP452-KB2901907-x86-x64-AllOS-ENU.exe" "$($linkRoot)328855&clcid=$($LHex)"
    dlPreReq $fr WindowsIdentityFoundation Windows6.0-KB974405-x86.msu "$($linkRoot)190775&clcid=$($LHex)"
    dlPreReq $fr WindowsIdentityFoundation Windows6.0-KB974405-x64.msu "$($linkRoot)190771&clcid=$($LHex)"
    dlPreReq $fr WindowsIdentityFoundation Windows6.1-KB974405-x86.msu "$($linkRoot)190781&clcid=$($LHex)"
    dlPreReq $fr WindowsIdentityFoundation Windows6.1-KB974405-x64.msu "$($linkRoot)190780&clcid=$($LHex)"
    dlPreReq $fr SQLNativeClient sqlncli_x64.msi "$($linkRoot)178252&clcid=$($LHex)"
    dlPreReq $fr SQLSharedManagementObjects SharedManagementObjects_x64.msi "$($linkRoot)293644&clcid=$($LHex)"
    dlPreReq $fr SQLSystemCLRTypes SQLSysClrTypes_x64.msi "$($linkRoot)293645&clcid=$($LHex)" 
    dlPreReq $fr ReportViewer "ReportViewer.msi" "$($linkRoot)390736&clcid=$($LHex)"
    dlPreReq $fr SQLExpr SQLEXPR_x86_$langCode.exe "$($linkRoot)403076&clcid=$($LHex)" 
    dlPreReq $fr SQLExprRequiredSp SQLEXPR_x86_$langCode.exe "$($linkRoot)403077&clcid=$($LHex)"
    dlPreReq $fr SQLCE SSCERuntime_x86-$langCode.exe "$($linkRoot)253117&clcid=$($LHex)"
    dlPreReq $fr SQLCE SSCERuntime_x64-$langCode.exe "$($linkRoot)253118&clcid=$($LHex)"
    dlPreReq $fr MSI45 Windows6.0-KB942288-v2-x86.msu "$($linkRoot)139108&clcid=0x409"
    dlPreReq $fr MSI45 Windows6.0-KB942288-v2-x64.msu "$($linkRoot)139110&clcid=0x409"
    dlPreReq $fr VCRedist vcredist_x86.exe "$($linkRoot)402042&clcid=$($LHex)"
    dlPreReq $fr VCRedist vcredist_x64.exe "$($linkRoot)402059&clcid=$($LHex)"
    dlPreReq $fr VCRedist10 vcredist_x86.exe "$($linkRoot)404261&clcid=$($LHex)"
    dlPreReq $fr VCRedist10 vcredist_x64.exe "$($linkRoot)404264&clcid=$($LHex)"
    dlPreReq $fr IDCRL wllogin_32.msi "$($linkRoot)194721&clcid=$($LHex)"
    dlPreReq $fr IDCRL wllogin_64.msi "$($linkRoot)194722&clcid=$($LHex)"
    dlPreReq $fr WindowsIdentityFoundationExtensions "MicrosoftIdentityExtensions-64.msi" ""
    dlPreReq $fr Msoidcrl msoidcli_32bit.msi "$($linkRoot)317650&clcid=$($LHex)"
    dlPreReq $fr Msoidcrl msoidcli_64bit.msi "$($linkRoot)317651&clcid=$($LHex)"
  }
  else
  {
    write-host "No folder selected, operation was aborted. Run Create-CRM2016Redist to retry."
  }
}

#kick off the script 
Create-CRM2016Redist

#End Script 


Dynamics CRM in the Field

The smart city revolution will depend on local leadership


From autonomous vehicles to automated everything, the pace of smart city technology is accelerating, sparking equal parts enthusiasm and anxiety. Industry and government leaders around the world are looking for guidance as they attempt to navigate the unknowns accompanying these shifts. It turns out that looking inward to the middle of the U.S. may yield some of the greatest insights.

Here’s how some policymakers in cities across the U.S. have been preparing for the future.

Collaborate across sectors

Through my platform, Digi.City, I host a multi-city series of discussions with lawmakers, government officials, tech leaders, and corporations. Across the nation, leaders from rural and urban areas, at statewide and local levels, and from the public and private sectors are readying themselves to lead the smart city revolution.

Denver is teaming with Panasonic to create a mini smart city test ground, while San Diego is working with Qualcomm, GE, and other private sector companies to launch a large-scale internet of things (IoT) network to power the next wave of smart devices.

During a roundtable I hosted in Indianapolis, city deputy mayor Angela Smith Jones emphasized the importance of partnering with other local and state entities, as well as academic and private sectors. For example, 16 Tech is a proposed innovation zone adjacent to the Indiana University-Purdue University Indianapolis (IUPUI) campus and catalyzed by anchor tenant The Indiana Biosciences Research Institute (IBRI). It is surrounded by two bodies of water — White River and Fall Creek — making it an ideal location to test smart water technology. Global Water Technologies, an industry leader in water efficiency, recently proposed a living laboratory at 16 Tech to showcase the benefits of smart technology.

Dave Brodin, chief operating officer for Smithville Fiber — Indiana’s largest independent telecommunications broadband provider — agrees with this line of thinking. In 2015, Smithville Fiber reached an agreement with Jasper, Indiana (population 16,000) to build a high-speed gigabit network that will reach the entire municipality by 2018.

Brodin explained that Jasper made it easy to collaborate directly with city leadership through a request for proposal (RFP) process that created an open dialogue between the city and trusted operators. This made it possible to create a plan tailored to the specific communities involved. Brodin noted that other cities have erected what he views as unnecessary roadblocks, such as charging permitting fees that could amount to thousands of dollars per connection. “In that case, we have had to walk away,” he said.

Eliminate regulatory roadblocks

The cities that will be able to leverage smart technology to its full potential already know what regulatory roadblocks might be standing in their way.

Indiana state senator Brandt Hershman illustrated his state’s approach to 5G — the next generation mobile networks that will power widespread IoT adoption, along with smart city innovation. This past legislative session, the state passed a measure that eases the path forward on small cell deployment, a key facet of powering these ultra-fast, hyper-responsive wireless networks.

Drawing comparisons to more traditional types of investment, Senator Hershman noted, “If it were Honda and Toyota coming to the state, we write them a check. But when it comes to 5G wireless, why do we want to put up barriers?”

Arizona followed suit earlier this year, passing legislation that cleared the pathway to small cell and 5G deployment. This “open for business” regulatory climate creates an enticing environment for a slew of high-tech companies and projects. As a result, the economy is growing, consumers are benefiting, and investors are responding.

Get residents onboard

There are myriad technological solutions already available to help cities deliver key services more efficiently. From LED lights to traffic signals integrated with transportation platforms to sensor-laden trash cans that can measure air quality, cities have an amazing opportunity to do more without breaking the bank.

But as with any innovation, not everyone may understand how adding sensor technology can benefit them. Cities that want to be on the forefront of the smart city revolution should join forces with tech innovators, community advocates, and university researchers to come up with creative ways to help residents see how smart technology can benefit their community.

In Chicago, computer scientist and urban data expert Charlie Catlett is leading an “Array of Things” pilot with partners including the University of Chicago, Argonne National Laboratory, and Lane Tech High School. The project teaches 150 high school students high-tech skills like data analysis by giving them access to data from 500 sensors, as well as soft skills like problem solving and teamwork. Catlett said, “It’s about empowering students to see smart city technology not as something some company does, [but] as an opportunity to make a difference.”

These kinds of partnerships help cities and their corporate partners by creating a single action plan that all stakeholders can get behind.

Chelsea Collier is the founder of Digi.City, a platform for smart city technology and policy. She also serves as Editor-At-Large for Smart City Connect, is a Co-Founder of Impact Hub Austin, a Sr Advisor for Texans for Economic Progress and served as a Zhi-Xing Eisenhower Fellow in 2016. 


Big Data – VentureBeat

Still More SQL Server Features that Time Forgot


The series so far:

  1. The SQL Server Features that Time Forgot: Data Quality Services, Master Data Services, Policy-Based Management, Management Data Warehouse and Service Broker
  2. More SQL Server Features that Time Forgot: Auto-shrink, Buffer-pool extension, Database Diagrams, Database Engine Tuning Advisor, and SQL CLR
  3. Even more SQL Server Features that Time forgot: In-Memory OLTP, lightweight pooling, the sql_variant data type, stretch databases, transaction savepoints, and XML indexes
  4. Still More SQL Server Features that Time Forgot: Active Directory Helper Service, Data Transformation Services, DBCC commands, English Query, Native XML Web Services, Northwind and pubs databases, Notification Services, SQL Mail, SQL Server Distributed Management Objects, Surface Area Configuration Tool, utilities, and Web Assistant

In the previous articles of this series, we focused on SQL Server components that are, for better or worse, still part of the product. We covered such features as Service Broker, auto-shrink, database diagrams, XML indexes, and a variety of others. I picked these features because of the buzz they’ve generated over the years and the landslide of opinions that went with it.

Despite all the brouhaha, Microsoft seems determined to keep these components in play, at least in the foreseeable future. Not all features have been so lucky. SQL Server’s history is checkered with memories of features past, components deprecated or dropped during one of the product’s many release cycles, sometimes with little fanfare. Many of these features have generated their own fair share of controversy, either because of how they were implemented or because they were removed. Other components have barely been missed.

Here we look at a number of features that were once part of SQL Server and have since been removed or deprecated, with some being dismissed many years back. For the most part, I’ve listed the features in alphabetical order to avoid prioritizing them or editorializing too much on their departure. You can think of this article as a trip down memory lane, without the nostalgia or remorse that often accompanies such reflection. Mostly it’s just a way to have some fun as we finish up this series.

Active Directory Helper Service

The Active Directory Helper Service, MSSQLServerADHelper, was introduced in SQL Server 2000 to help integrate SQL Server with Active Directory (AD). The service made it possible for the SQL Server service to register itself in an AD domain. In this way, the SQL Server service could run under a domain account with local administrative rights, while being able to add or remove AD objects related to the SQL Server instance.

Only one instance of the Helper Service ran on a host server, regardless of the number of SQL Server instances installed on that host. The service ran only when the SQL Server service needed to access AD. The Helper Service also played a role in replication and SQL Server Analysis Services (SSAS). To support the service, SQL Server included three system stored procedures: sp_ActiveDirectory_Obj, sp_ActiveDirectory_SCP and sp_ActiveDirectory_Start.

Microsoft discontinued the Helper Service in SQL Server 2012, removing the service and its associated stored procedures from the product. The company provided few specifics for why the service was removed, but it appears that the service was simply no longer being used.

Data Transformation Services

Anyone who’s been around SQL Server for any length of time will no doubt remember Data Transformation Services (DTS), that loveable collection of features and tools for carrying out data extract, transform and load (ETL) operations.

First introduced in SQL Server 7, DTS provided the components necessary to connect to SQL Server and other data sources in order to import or export data and transform it along the way. Prior to that, database developers had to rely on utilities such as bcp to move data from one place to another, with few useful tools for efficiently transforming the data.

With DTS, developers could define savable packages that connected to heterogeneous data sources and performed ETL operations. They could then run the packages on demand or schedule them to run at regular intervals.

Unfortunately, DTS had a number of limitations, especially when considered against the backdrop of a rapidly changing data culture. For this reason, Microsoft effectively ditched DTS in SQL Server 2005 and offered in its place SQL Server Integration Services (SSIS), a far more robust ETL tool that included advanced control flow, error handling, and transformation capabilities, along with a number of other new and improved features.

DBCC gang

Over the years, Microsoft has introduced and then removed an assortment of SQL Server DBCC statements. (DBCC is short for Database Console Commands.) One of these statements was DBCC DBREPAIR, which provided a quick way to drop a damaged database. In SQL Server 2005, Microsoft gave this statement the boot, informing customers that they should instead use the DROP DATABASE statement going forward.

Another DBCC statement that Microsoft finally ousted was DBCC NEWALLOC, which could be used to verify data and index page allocation within the extent structure. Starting with SQL Server 2000, Microsoft included the statement only for backward compatibility, removing it altogether in SQL Server 2014.

A couple other DBCC statements that have been laid to rest are DBCC PINTABLE and DBCC UNPINTABLE. The first was used to mark a table as pinned, and the second to mark it as unpinned. If a table were pinned, the database engine would not flush the table’s pages from memory.

Microsoft introduced the ability to pin a table in SQL Server 6.5 as a way to boost performance. Unfortunately, pinning a table resulted in adverse effects, such as damaging the buffer pool or causing the server to run out of memory. It wasn’t long before Microsoft disabled these statements, although they’re still part of the T-SQL lexicon. They just don’t do anything.

DBCC ROWLOCK is another statement that goes back to SQL Server 6.5. The statement enabled Insert Row Locking (IRL) operations on a database’s tables. However, this capability became unnecessary because Microsoft soon automated row locking. In fact, by SQL Server 2000, the statement was included for backward compatibility only, although it wasn’t until SQL Server 2014 that Microsoft finally removed it.

Microsoft also removed the DBCC TEXTALL and DBCC TEXTALLOC statements from SQL Server 2014. The DBCC TEXTALL statement verified the integrity of the text, ntext, and image columns for all tables in a database. The DBCC TEXTALLOC statement did the same thing, but only for a specified table. Both statements originated with SQL Server 6.5 and by SQL Server 2000 were included for backward compatibility only.

No doubt, plenty of other T-SQL statements have come and gone, many without leaving a paper trail, but SQL Server 2014 seemed particularly hard on DBCC statements. Perhaps Microsoft saw that as a good time to do a bit of house-cleaning.

English Query

Introduced in SQL Server 6.5, English Query made it possible to automatically transform a question or statement written in English into a T-SQL statement. Microsoft offered English Query as part of SQL Server and as a standalone product.

English Query included a development environment and runtime engine to support the query transformation process. Ideally, an end user could type a question into an application’s text box, English Query would interpret the question and generate the T-SQL query, and the database engine would return the results, just like any other query.
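As a toy illustration of that pipeline, a pattern-based rewriter captures the gist. To be clear, the real product worked from a semantic model that developers built in the English Query development environment, not from regex patterns; this sketch only shows the question-in, T-SQL-out shape of the feature:

```python
import re

# Toy sketch only: English Query used a developer-built semantic model,
# not pattern matching. This just illustrates the idea of mapping an
# English question to a T-SQL string.
PATTERNS = [
    (re.compile(r"how many (\w+) are there", re.IGNORECASE),
     "SELECT COUNT(*) FROM {0};"),
    (re.compile(r"show me all (\w+)", re.IGNORECASE),
     "SELECT * FROM {0};"),
]

def english_to_sql(question: str) -> str:
    """Return a T-SQL string for a recognized English question."""
    for pattern, template in PATTERNS:
        match = pattern.search(question)
        if match:
            return template.format(match.group(1))
    raise ValueError("question not understood")

print(english_to_sql("How many customers are there?"))
# SELECT COUNT(*) FROM customers;
```

The hard part, and the part this toy skips entirely, is resolving the English nouns and verbs against the actual schema, which is exactly what the semantic model handled.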

In SQL Server 2005, Microsoft ditched English Query altogether. From then on, customers could no longer install or upgrade the product. However, if they upgraded a SQL Server 2000 instance to SQL Server 2005, and English Query had been implemented in the original installation, the English Query component would still work. In addition, customers with a SQL Server 2005 license could apparently install SQL Server 2000 and then use English Query against a SQL Server 2005 database, but those days are long gone.

Like many SQL Server features, English Query received an assortment of mixed reviews. Some developers liked it and made use of it. Others did not. At some point, Microsoft must have determined there was not enough interest in the feature to bother, so English Query got the axe, which came as a surprise to a number of users.

Perhaps in this case, Microsoft had been ahead of its time. When you consider how far we’ve come with technologies such as Siri, Google Assistant, and even Cortana, the potential for English Query was certainly there.

Native XML Web Services

In SQL Server 2005, Microsoft added Native XML Web Services to provide a standards-based structure for facilitating access to the database engine. Using these services, an application could send a Simple Object Access Protocol (SOAP) request to a SQL Server instance in order to execute T-SQL batch statements, stored procedures, or scalar-valued user-defined functions.

To carry out these operations, a SOAP/HTTP endpoint had to be defined on the server to provide a gateway for HTTP clients issuing SOAP requests. The T-SQL modules (batch statements, procedures, and functions) were made available as web methods to the endpoint users. Together these methods formed the basis of the web service.
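To make the mechanics concrete, a SOAP request of the kind those endpoints consumed is just an XML envelope wrapping a method call. A minimal sketch in Python's standard library follows; the web method name GetOrderTotal and its namespace are hypothetical, chosen only to show the shape of the request:

```python
import xml.etree.ElementTree as ET

# Build a minimal SOAP-style envelope invoking a stored procedure that an
# endpoint exposes as a web method. The method name "GetOrderTotal" and
# its namespace are hypothetical, for illustration only.
SOAP_NS = "http://www.w3.org/2003/05/soap-envelope"
METHOD_NS = "urn:example-sql-endpoint"  # hypothetical namespace

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
call = ET.SubElement(body, f"{{{METHOD_NS}}}GetOrderTotal")
param = ET.SubElement(call, f"{{{METHOD_NS}}}orderId")
param.text = "42"

# Serialize to the XML text that would be POSTed to the SOAP/HTTP endpoint.
request = ET.tostring(envelope, encoding="unicode")
print(request)
```

An HTTP POST carrying that envelope to the endpoint URL was the whole client protocol, which is also why converting such endpoints to WCF services was straightforward.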

Microsoft deprecated the Native XML Web Services in SQL Server 2008, with the recommendation that any SOAP/HTTP endpoints be converted to ASP.NET or Windows Communications Foundation (WCF) endpoints. These newer technologies were considered more robust, scalable, and secure than SOAP. Microsoft eventually removed the Native XML Web Services feature altogether.

Northwind and pubs databases

Who among us does not remember the pubs and Northwind databases? Even today, you can find references to them strewn across the web (mostly in outdated resources). They certainly deserve a mention as we stroll down memory lane.

The pubs database was developed by Sybase and came to Microsoft as part of the Sybase-Microsoft partnership in the early ’90s. The database included about 10 or so tables, based on a bookstore model. For example, the database contained the Titles, Authors, and Publishers tables, among several others. The pubs database provided a clean and simple example for demonstrating such concepts as many-to-many relationships and atomic data modeling.

But the pubs database was too basic to demonstrate more complex data modeling concepts and SQL Server features, so with the release of SQL Server 2000, Microsoft also introduced the Northwind database, which had its origins in Microsoft Access. The SQL Server team coopted the database to provide a more useful example of database concepts, without having to do a lot of the work themselves.

The Northwind database was based on a manufacturing model and included such tables as Customers, Orders and Employees. The database was still relatively simple, offering only a few more tables than the pubs database, but it helped to demonstrate slightly more complex relationships, such as hierarchical data. With the release of SQL Server 2005, the Northwind database was usurped by the now infamous AdventureWorks database.

Notification Services

Microsoft introduced Notification Services in SQL Server 2000 to provide a platform for developing and deploying applications that generated and sent notifications to subscribers. Notification Services allowed developers to build applications that could send critical information to customers, employees, or other types of users when data changed in a specified way.

Developers could set up the service to generate and send notifications whenever triggering events occurred. In addition, subscribers could schedule notifications to be generated and sent at their convenience. The service could be configured to send notifications to a subscriber’s email account, cell phone, personal digital assistant (PDA), or Windows Messenger account.

Microsoft pulled the plug on Notification Services in SQL Server 2008 because the feature simply wasn’t being adopted widely enough, which I doubt surprised many. Notification Services had a reputation for being inflexible, confusing, and difficult to implement, requiring a great deal of patience just to get a solution up and running. That said, some developers were able to make Notification Services work and thought it could do some cool stuff, but they seemed to be the exception. For most, getting to that point wasn’t worth the effort.

After pulling Notification Services from the product, Microsoft recommended that users turn to SQL Server Reporting Services (SSRS) and take advantage of such features as data-driven subscriptions.

SQL Mail

I’m not sure when Microsoft introduced SQL Mail, but it was there in the early days of SQL Server, providing users with a tool for sending, receiving, deleting, and processing email messages. Best of all, the service could send messages that included T-SQL query results.

SQL Mail used the Extended Messaging Application Programming Interface (MAPI) to communicate with an external email server and process email messages. However, to make this possible, an application that supported Extended MAPI also had to be installed on the server that hosted the SQL Server instance. The application would then provide SQL Server with the Extended MAPI components needed to communicate with the email server.

Microsoft introduced Database Mail in SQL Server 2005 as a replacement to SQL Mail because Database Mail was more robust and secure and offered better performance. Database Mail was also based on the Simple Mail Transfer Protocol (SMTP), rather than MAPI, and did not require that a local email application be installed. Microsoft finally dropped SQL Mail in SQL Server 2012.
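The SMTP approach is easy to picture: compose a message that carries the query results and hand it to a mail server, with no local mail client required. Here is a rough sketch using Python's standard library as a stand-in; the addresses and host are placeholders, and the send itself is left commented out:

```python
import smtplib
from email.message import EmailMessage

# Sketch of the SMTP-style delivery Database Mail performs: compose a
# message carrying query results and hand it to an SMTP server. The
# addresses and host below are placeholders; the send is not executed.
rows = [("widget", 12), ("gadget", 7)]  # stand-in for query results

msg = EmailMessage()
msg["Subject"] = "Nightly inventory report"
msg["From"] = "dba@example.com"   # placeholder sender
msg["To"] = "ops@example.com"     # placeholder recipient
msg.set_content("\n".join(f"{name}: {qty}" for name, qty in rows))

# with smtplib.SMTP("smtp.example.com") as server:  # placeholder host
#     server.send_message(msg)
print(msg["Subject"], "->", msg["To"])
```

Because everything goes through a plain SMTP handoff, the database server needs nothing installed beyond network access to a mail server, which is precisely the dependency SQL Mail's Extended MAPI design could not shed.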

SQL Server Distributed Management Objects

SQL Server Distributed Management Objects (SQL-DMO) were a collection of programming objects that facilitated database and replication management. The objects made it possible to automate repetitive administrative tasks as well as create and manage SQL Server objects and SQL Server Agent jobs, alerts, and operators. Developers could create SQL-DMO applications using any OLE Automation controller or COM client development platform based on C or C++.

By SQL Server 2005, SQL-DMO could no longer keep up with the new capabilities being introduced into the database platform. The time had come to replace the aging APIs. As a result, Microsoft introduced SQL Server Management Objects (SMO), a more robust set of APIs for administering SQL Server. SMO offered advanced caching and scripting features, along with a number of other capabilities, such as delayed instantiation.

To support backward compatibility, Microsoft continued to include SQL-DMO until SQL Server 2012, when it was dropped unceremoniously from the product. The thinking, no doubt, was that seven years was long enough for developers to update their apps and move into the 21st century.

Surface Area Configuration Tool

Remember the Surface Area Configuration Tool? It was introduced in SQL Server 2005 and dropped in SQL Server 2008, making it one of the product’s most short-lived features. The idea behind it was to improve security by providing a centralized tool for limiting the number of ways that would-be hackers and cybercriminals could gain access into the SQL Server environment.

The Surface Area Configuration Tool made it possible for administrators to enable, disable, start, or stop SQL Server features and services, as well as control remote connectivity. The tool leveraged WMI to facilitate these capabilities. Microsoft also made a command-line version of the tool available.

After dropping the Surface Area Configuration Tool, Microsoft recommended that users turn to such tools as SQL Server Management Studio (SSMS), SQL Server Configuration Manager, and policy-based management.

Utility hit list

As with many SQL Server components that have come and gone over the years, so too have an assortment of command-line utilities. Take, for example, the isql utility, which would let users run T-SQL statements, stored procedures, and script files from a command prompt. The utility used the old DB-Library protocol to communicate with SQL Server. Microsoft stopped including the isql utility in SQL Server 2005, pointing users to the sqlcmd utility as a replacement.

A similar utility, osql, does pretty much everything the isql utility did, except that it uses the ODBC protocol. However, the osql utility has been deprecated since at least SQL Server 2012 and will likely be pulled from the product in the not-too-distant future.

The same fate is in store for the sqlps utility, which launches a Windows PowerShell session, with the SQL Server PowerShell provider and related cmdlets loaded and registered.

Another deprecated utility is sqlmaint, which is slated to be removed after SQL Server 2017. The sqlmaint utility carries out database maintenance operations, such as performing DBCC checks, backing up database and log files, rebuilding indexes, and updating statistics. Going forward, DBAs should use the SQL Server maintenance plan feature instead.

A couple other deprecated utilities are makepipe and readpipe, which are used to test the integrity of the SQL Server Named Pipe services. Both utilities will soon be removed. In fact, they’re not even installed during setup, although they can still be found on the installation media. Same goes for the odbcping utility, which tests the integrity of an ODBC data source and verifies client connectivity.

Web Assistant

The Web Assistant, which I believe was introduced in SQL Server 7, offered a wizard for generating static HTML pages that contained SQL Server data. The wizard used a set of related system stored procedures to build the pages initially and to rebuild them if the data changed. The pages were fairly rudimentary, even by late ’90s standards, but were adequate enough for simple use cases.

With the release of SQL Server 2005, Microsoft did away with the wizard and kept only the stored procedures, which finally got dumped in SQL Server 2014. Whether anyone used the procedures after the wizard was removed is hard to say. Whether they used the wizard before that is even more of a mystery. I doubt many even noticed the procedures were gone.

To the best of my knowledge, Microsoft has not tried to replace this feature, perhaps deciding that static web pages provided little value, that HTML development has gotten far too sophisticated, that SSRS is more than adequate, or that a relational database management system was not the best place to be playing at HTML development. For whatever reason, Web Assistant and all of its offspring are gone for good.

History in the making

There are undoubtedly plenty of other SQL Server features that have gone missing over the years, in addition to what we’ve covered here. Perhaps you recall a few components that have a special place in your heart. Given how SQL Server has expanded and evolved over the years, it would be difficult to catch them all, especially if you also consider SSRS, SSAS, SSIS, or various other components. Whether or not you agree with their demise is another matter altogether.


SQL – Simple Talk

Microsoft Dynamics 365 ♥ Azure App Services


Peanut Butter & Jelly ….. Cookies & Milk ….. Han Solo & Chewbacca

There have been some great pairings that have stood the test of time. Individually, they each have their unique strengths, but combining them makes an unstoppable duo! I’ve been working in the Microsoft space for my entire career, and I want to nominate another amazing couple: Microsoft Dynamics 365 and Microsoft Azure App Services.

What is Dynamics 365 CRM?

In Dynamics 365, CRM stands for customer relationship management. It’s a category of integrated, data-driven solutions that improve how you interact and do business with your customers. CRM systems and applications are designed to manage and maintain customer relationships, track engagements and sales, and deliver actionable data—all in one place.

Dynamics 365 is a very powerful platform, but one platform cannot do it all.

What if you need to run a consistent background process?

What if you need to build a custom web interface or API?

What if you need to present data or documents publicly across the internet?

No need to worry because Microsoft Azure App Services can handle these tasks.

What is Microsoft Azure App Services?

The Microsoft Azure App Service (MAAS) is a platform-as-a-service (PaaS) offering from Microsoft Azure.  This service allows you to create WebApps for custom interfaces and application programming interfaces (APIs), and WebJobs to automate complex tasks.  MAAS runs apps on fully managed virtual machines (VMs) that are easy to scale when additional resources are needed.  Here are some of the reasons why the MAAS platform and Dynamics 365 complement each other:


MAAS and Dynamics 365 both run on top of the .NET Framework, making it very easy to integrate the two platforms using their APIs.
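To make the integration concrete, here is a rough sketch of a WebJob-style process creating a record through the Dynamics 365 Web API (the platform's OData REST interface). Python's standard library stands in here for brevity, though a real WebJob would more likely be .NET, and the organization URL, payload, and bearer token are all placeholders:

```python
import json
import urllib.request

# Sketch of a WebJob-style call into the Dynamics 365 Web API (OData).
# The organization URL, payload, and bearer token are placeholders; a
# real app would first obtain the token from Azure Active Directory.
ORG_URL = "https://contoso.crm.dynamics.com"  # placeholder org
TOKEN = "placeholder-bearer-token"

payload = json.dumps({"name": "New account from WebJob"}).encode()
req = urllib.request.Request(
    f"{ORG_URL}/api/data/v9.0/accounts",
    data=payload,
    method="POST",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
        "OData-Version": "4.0",
    },
)
# urllib.request.urlopen(req)  # not executed here: placeholder host
print(req.get_method(), req.get_full_url())
```

The same request shape works against other entity sets; only the URL path and JSON body change.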

Dynamics 365 assembly limitations

Dynamics 365 utilizes custom assemblies to execute custom code within a process.  These processes are bound by certain limitations, such as an execution timeout and the inability to include foreign assemblies.  The MAAS platform is not bound by these limitations and can execute any code that is required.


Dynamics 365 processes execute based on a data-driven event. The MAAS platform can also execute based on a time-driven event, which allows it to perform repetitive automated functions based on the time of day.


All HTTPS web traffic between MAAS and Dynamics is encrypted using a sha256RSA algorithm.  The certificate implemented to encrypt the data is issued by Microsoft IT SSL SHA2.  You can read Microsoft’s issuer statement here.


The MAAS platform is International Standards Organization (ISO), Service Organization Controls (SOC), and Payment Card Industry (PCI) compliant.  You can read Microsoft’s compliance offerings here.

Global scale with high availability

The MAAS platform can scale up or out manually or automatically. Microsoft’s global data center infrastructure promises availability as defined here.


CRM Software Blog | Dynamics 365



Article about how to make food for vegans at Thanksgiving.


As a carnivore whose foodie philosophy is “make things as delicious as possible, whatever it takes,” I used to see vegan dinner guests as something I had to work around, and for that, I apologize. Vegan foodies can go on about how delicious soy bacon is, but as a cook who eats meat, I tended to think they were using a different measurement stick for “delicious.” I was selfishly aggravated at having to “dumb down” dishes and sacrifice taste for accommodation.


But I was wrong, my friends. I was SO wrong. A few alterations are all it takes to make my vegan friends look forward to dinner at my home, and given all we have to work with today, there’s no sacrifice. Swearsies. To make room at your Thanksgiving table for your vegan friends, survey your sides and decide if they can be split—to make half vegan and half non—or just veganize the whole pan. It’s not as hard as you think; in most cases it’s simply a matter of replacing one or two items.

Embrace Coconut Oil

If there is one ingredient I wish I had seized on earlier in my life, it’s coconut oil. I just refused to accept the truth of its awesomeness (probably because butter is my reason for being), but now it’s in my regular rotation. It’s so full of flavor, it’s become my go-to for popping popcorn, sauteing vegetables, even for searing meat. Rub coconut oil under raw chicken skin before cooking it and tell me I’m wrong. Veggie side dishes you were planning to prepare with butter can be made with coconut oil to fantastically delicious, vegan-friendly results. Perhaps your pan of stuffing can use coconut oil to saute instead of butter. Coconut oil doesn’t have the baking qualities of butter, but for stovetop dishes, it’s ideal.

Accept Veggie Stock Into Your Heart

As always, the Better than Bouillon folks have come to our rescue. Their dark, rich and not overly salty vegetable stock is so good it’s a seamless replacement for blond or dark meat stocks. For soups, for stuffing, for gravy, it’s a great consideration. I promise, you’re not losing anything. I use it voluntarily all the time, even though the beef and chicken stock sits next to it in the fridge. It has a distinctly rich, roasted veggie flavor that everyone will love.

Roast Your Nuts

Incorporating nuts into your cooking solves so many problems, and doing so can add protein to dishes that wouldn’t otherwise have it. Pecans can create a fulfilling, hearty crust in pies both sweet and savory, and they’ll most likely taste better than a vegan crust recipe that tries to replicate pastry. Pistachios can replace breadcrumbs to create breaded, crunchy coatings, and soaked, blended cashews can create creaminess instead of, you know, cream.

Caramelize to Create Richness

A well-roasted root vegetable is a glorious and unique taste you can’t reproduce with meat. It’s savory and sweet. Dehydrating or roasting concentrates flavors down to a truly intense tasting delight—try dehydrating plum or cherry tomato halves; they’re candy. If you feel like vegetables can’t shine on their own the same way as a roast, you’re wrong. Give them the same attention you would give meat and they’ll stand up for you.

Vegan Cheese Doesn’t Suck

I’m not a fan of frankenmeats or soy products, and I’ve never met a Tofurky I didn’t despise. Too often they just remind me of the food they’re poorly imitating. I’d rather find ways to let vegetables shine, but vegan cheese might be the exception. I particularly like the offerings at Trader Joe’s, and when whipped into something like mashed potatoes, or as a melty cap atop a soup or a roasted tomato, shredded vegan cheese is almost perfect.


Thanksgiving is our holiday, and it shouldn’t be about glorifying a bunch of lost white dudes with a penchant for handing out smallpox blankets. It should be about creating a family around you, embracing every stupid thing about them, and celebrating them with grotesque amounts of overeating. Of all the wacky and ridiculous quirks you have in your chosen family, choosing to forgo the most tasteless meat bird ever put on a plate surely can’t be the weirdest, and it’s definitely easy to accommodate.

