Tag Archives: warehouse

Teradata Named a Leader in Cloud Data Warehouse Evaluation by Independent Research Firm

April 11, 2021   BI News and Info

Teradata (NYSE: TDC), a leading multi-cloud data warehouse platform provider, today announced that Forrester Research has named Teradata a Leader in “The Forrester Wave™: Cloud Data Warehouse, Q1 2021,” written by VP and Principal Analyst Noel Yuhanna and published on March 24, 2021. Forrester analyzed and scored the top 13 vendors in the cloud data warehouse market according to 26 criteria.

“Over the past 12-18 months, Teradata has been laser-focused on our cloud capabilities and the performance of our cloud business,” said Steve McMillan, CEO of Teradata. “From shoring up our cloud credentials with key new executive appointments to significantly increasing our cloud R&D spend, our commitment to and investment in the cloud has successfully positioned Teradata as a modern, relevant cloud platform for our customers. We believe that this recognition from Forrester is another validation that our cloud-first agenda is winning in the market.”

Forrester’s analysis of Teradata in the Cloud Data Warehouse Wave evaluation is based on Vantage — a multi-cloud data warehouse platform that enables ecosystem simplification by connecting analytics, data lakes, and data warehouses. 

According to Forrester’s evaluation, “[Vantage] combines open source and commercial technologies to operationalize insights; solve business problems; enable descriptive, predictive, and prescriptive analytics; and deliver performance for mixed workloads with high query concurrency using workload management and adaptive optimization. Teradata Vantage integrates multiple analytic languages – including SQL, R, Python, SAS, and Java – and supports various data types, including JSON, Avro, Parquet, relational, spatial, and temporal.”

The Forrester report also notes that “Customers like Teradata Vantage’s hybrid cloud platform, reliability, data science, advanced analytics, and ease of management from an infrastructure perspective. Top use cases include BI acceleration, customer intelligence, real-time analytics, embedded data science functions, fraud detection, time-series analysis, data lake integration, data warehouse modernization, and data services.”

Read the complete “The Forrester Wave™: Cloud Data Warehouse, Q1 2021” report here.

With Vantage, enterprise-scale companies can eliminate silos and cost-effectively query all their data all the time. Regardless of where the data resides – in the cloud using low-cost object stores, on multiple clouds, on-premises, or any combination thereof – organizations can get a complete view of their business. And by combining Vantage with first-party cloud services, Teradata enables customers to expand their cloud ecosystem with deep integration of cloud-specific, cloud-native services.


Teradata United States


Unlocking Data Storage: The Traditional Data Warehouse vs. Cloud Data Warehouse

November 29, 2020   Sisense

We live in a world of data: There’s more of it than ever before, in a ceaselessly expanding array of forms and locations. Dealing with Data is your window into the ways data teams are tackling the challenges of this new world to help their companies and their customers thrive.

The data industry has changed drastically over the last 10 years, with perhaps some of the biggest changes happening in the realm of data storage and processing.

The datasphere is expanding at an exponential rate, and companies of all sizes are sitting on immense data stores. And where does all this data live? The cloud. 

Modern businesses are born on the cloud: Their systems are built with cloud-native architecture, and their data teams work with cloud data systems instead of on-premises servers.

The proliferation of cloud options has coincided with a lower bar to entry for younger companies, but businesses of all ages have seen the sense of storing their data online instead of on-premises.

The increased interest in cloud storage (and increased volume of data being stored) coincides with an increased demand for data processing engines that can handle more data than ever before.

The shift to the cloud has opened a lot of doors for teams to build bolder products and infuse insights of all kinds into their in-house workflows, user apps, and more.

The cloud is the future, but how did we get here?
Let’s dig into the history of the traditional data warehouse versus cloud data warehouses.


Data warehouses vs. databases

The growing popularity of data warehouses has caused a misconception that they are wildly different from databases. While the architecture of traditional data warehouses and cloud data warehouses does differ, the ways in which data professionals interact with them (via SQL or SQL-like languages) are roughly the same.

The primary differentiator is the data workload they serve. Let’s explore:

Data warehouse (online analytical processing, OLAP):
  • Write once, read many
  • Best for large table scans
  • Typically a collection of many data sources
  • Petabyte-level storage
  • Columnar-based storage
  • Lower concurrency
  • Examples: Redshift, BigQuery, Snowflake

Database (online transaction processing, OLTP):
  • Write many, read many
  • Best for short table scans
  • Usually one source that serves an application
  • Terabyte-level storage
  • Row-based storage
  • Higher concurrency
  • Examples: Postgres, MySQL

Source: https://www.sisense.com/blog/how-to-build-a-performant-data-warehouse-in-redshift/

Given that both data warehouses and databases can be queried with SQL, the skillset required to use a data warehouse versus a database is roughly the same. The decision as to which one to use then comes down to what problem you’re looking to solve.

If there’s a need for data storage and processing of transactional data that serves an application, then an OLTP database is great. However, if the goal is to perform complex analytics on large sets of data from disparate sources, a warehouse is the better solution.

Before we look at modern data warehouses, it’s important to understand where data warehouses started to see why cloud data warehouses solve many analytics challenges.


Traditional vs. Cloud Explained

Traditional data warehouses

Before the rush to move infrastructure to the cloud, the data being captured and stored by businesses was already increasing, and thus there was a need for an alternative to OLTP databases that could process large volumes of data more efficiently. Businesses began to build what are now seen as traditional data warehouses.

A traditional data warehouse is typically a multi-tiered series of servers, data stores, and applications.

While the organization of these layers has been refined over the years, the interoperability of the technologies, the myriad pieces of software, and the orchestration of the systems make them a challenge to manage.

Further, these traditional data warehouses are typically on-premises solutions, which makes updating and managing their technology an additional layer of support overhead.

Cloud data warehouses

Traditional data warehouses solved the problem of processing and synthesizing large data volumes, but they presented new challenges for the analytics process.

Cloud data warehouses took the benefits of the cloud and applied them to data warehouses — bringing massively parallel processing to data teams of all sizes.

Software updates, hardware, and availability are all managed by a third-party cloud provider. 

Scaling the warehouse as business analytics needs grow is as simple as clicking a few buttons (and in some cases, it is even automatic).

The warehouse being hosted in the cloud makes it more accessible, and with a rise in cloud SaaS products, integrating a company’s myriad cloud apps (Salesforce, Marketo, etc.) with a cloud data warehouse is simple.

The reduced overhead and cost of ownership with cloud data warehouses often makes them much cheaper than traditional warehouses.

Cloud data warehouses in your data stack

We know what data warehouses do, but with so many applications that have their own databases and reporting, where does the warehouse fit inside your data stack? 

To answer this question, it’s important to consider what a cloud data warehouse does best: efficiently store and analyze large volumes of data. The cloud data warehouse does not replace your OLTP database, but instead serves as a repository in which you can load and store data from your databases and cloud SaaS tools.

With all of your data in one place, the warehouse acts as an efficient query engine for cleaning the data, aggregating it, and reporting it — often quickly querying your entire dataset with ease for ad hoc analytics needs. 

In recent years, there has been a rise in the use of data lakes, and cloud data warehouses are positioning themselves to be paired well with these. Data lakes are essentially sets of structured and unstructured data living in flat files in some kind of data storage. Cloud data warehouses have the ability to connect directly to lakes, making it easy to pair the two data strategies. 
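
One common pattern for this pairing is exposing lake files as external tables that can be queried alongside warehouse tables. The sketch below uses Amazon Redshift Spectrum as an example; the schema name, S3 path, IAM role, and the dim_customer table are all illustrative placeholders, not drawn from the article.

-- Illustrative only: register an S3-backed "lake" schema in Redshift Spectrum,
-- define an external table over Parquet files, and join it to a warehouse table.
CREATE EXTERNAL SCHEMA lake
FROM DATA CATALOG
DATABASE 'lake_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/spectrum_role'
CREATE EXTERNAL DATABASE IF NOT EXISTS;

CREATE EXTERNAL TABLE lake.click_events (
    event_time TIMESTAMP,
    user_id    BIGINT,
    page_url   VARCHAR(1024)
)
STORED AS PARQUET
LOCATION 's3://example-bucket/click_events/';

-- Flat lake files and curated warehouse tables in a single query.
SELECT d.customer_segment, COUNT(*) AS clicks
FROM lake.click_events e
JOIN dim_customer d ON d.user_id = e.user_id
GROUP BY d.customer_segment;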

A data-driven future powered by cloud data warehouse technologies

The three most popular cloud data warehouse technologies are Amazon’s Redshift, Snowflake, and Google’s BigQuery. They each handle the same workloads relatively well but differ in how computing and storage are architected within the warehouse.

While they’re all great options, the right choice will be based on the scaling needs and data type requirements of the business. Beyond that, the pricing structure for the three varies slightly, and based on the use case, certain warehouses can be more affordable than others.

As the number of cloud data warehouse options on the market grows, niche players will rise and fall in every industry, with companies choosing this or that cloud option based on its ability to handle their data uniquely well.

Whatever your company does and wherever you’re trying to infuse insights, be it into workflows or customer-facing apps, there’ll be a cloud option that works for you.

The future is in the clouds, and companies that understand this and look for ways to put their data in the right hands at the right time will succeed in amazing ways.

Adam Luba is an Analytics Engineer at Sisense who boasts almost five years in the data and analytics space. He’s passionate about empowering data-driven business decisions and loves working with data across its full life cycle.


Blog – Sisense


Clarifying Data Warehouse Design with Historical Dimensions

August 10, 2020   BI News and Info

We owe a lot to Ralph Kimball and friends. His practical warehouse design and conformed-dimension bus architecture are the industry standard. Business users can understand and query these warehouses directly and gain valuable insights into the business. Kimball’s practical approach focuses squarely on clarity and ease of use for the business users of the warehouse. Kudos to you and yours, Mr. Kimball.

That said, can the mainstay Type 2 slowly changing dimension be improved? Here I present the concept of historical dimensions as a way to solve some issues with the basic Type 2 slowly changing dimension promoted by Kimball. As we will see, clearly distinguishing between current and past dimension values pays off in clarity of design, flexibility of presentation, and ease of ETL maintenance.

Warehouse facts are inherently historical since transactions happen on a transaction date, balances are kept on a balance date, and so on. Dimension values, on the other hand, either are static (date and time, limited code sets) or change slowly. Not every dimension change needs to be recorded as history, but many do. When dimensions change, how should it be handled?

Kimball’s general answer is to choose between the standard slowly changing dimension (SCD) Types 1, 2, and 3. For each column in the dimension table, a determination should be made to 1) overwrite the old value, 2) insert a new row in the dimension with a new dimension key to record the new value, preserving the old, or 3) copy the old value to a previous value column in the row.

SCD Type 1 is a simple overwrite, and SCD Type 3 is somewhat special-purpose and limited. The workhorse of dimension history is, therefore, SCD Type 2. It is made possible by the use of a surrogate key on the dimension rather than the natural key. Historical fact rows are linked through the surrogate key to the version of the dimension row that was current when the fact was recorded. Usually, dimensions containing Type 2 history have effective and expiration dates, as well as a current indicator, which must be maintained as Type 2 SCD rows are inserted.
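
As a concrete sketch, a Type 2 customer dimension typically looks something like the following; the column names are illustrative, not taken from the article.

CREATE TABLE Dim_Customer (
    Customer_Key      INT          NOT NULL PRIMARY KEY, -- surrogate key
    Customer_Id       VARCHAR(20)  NOT NULL,             -- natural key from the source system
    Customer_Name     VARCHAR(100),
    Customer_Email    VARCHAR(100),
    Birth_Date        DATE,
    Effective_Date    DATE         NOT NULL,
    Expiration_Date   DATE         NOT NULL,
    Current_Indicator CHAR(1)      NOT NULL              -- 'Y' only on the latest version of each customer
);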

Limitations of Type 2 SCD

Type 2 SCD is usually presented as one of several choices as to how history is stored – a somewhat technical distinction that can be hidden from the business users. A designer might get the sense that he could present the first version of the warehouse using SCD Type 1 overwrites and add Type 2 SCD history in a later version. Since no structural changes are required, it should be able to drop right in, but this is not the case.

Storing Type 2 history in a dimension table fundamentally changes what that dimension contains. That leads directly to user confusion and incorrect results.

Consider how we treat fact tables. Kimball makes a strong point that one must declare the grain of the fact table, stating precisely what a fact table record represents. He writes that the most common error he sees is to not declare the grain at the beginning of the design process. Further, while the grain may be equivalent to the primary key of the fact table, the grain is properly declared in business terms first. Once the business definition is clear, the dimensional keys used in the fact become obvious.

Should we not treat dimension tables in a similar fashion? We must know exactly what a row in a dimension table represents, in business terms. While the primary key will always be the surrogate key of the dimension, both designers and users should be clear about what each row in the dimension table represents.

For example, what does each row in a Dim_Customer table represent? If SCD Type 1 overwrites are in place, we can say that it represents the latest information for each customer. If SCD Type 2 inserts are in place, the row now represents customer information at a certain point in time. Therefore, a business user must be fully aware of the history technique before he can understand what he is looking at in that dimension.

We can imagine the business user who is attempting to answer the question, “How many different customers have we had?” A simple

SELECT COUNT(*) FROM Dim_Customer

provides that answer for an SCD Type 1 table, but one would need to use

SELECT COUNT(DISTINCT Customer_Id) FROM Dim_Customer

(assuming Customer_Id is a natural key of the customer from the operational system) or

SELECT COUNT(*) FROM Dim_Customer WHERE Current_Indicator = 'Y'

to answer that simple question in an SCD Type 2 table.

SCD Type 2 introduces complications because it is trying to be both the current view and the historical view at the same time. It is like an actor on a stage who is trying to play two characters in the same scene. The audience is confused, and so is the actor.

Let’s examine another limitation of SCD Type 2. What happens if we wish to use current dimension values when examining historical facts? For example, we may wish to send emails to all customers who bought certain products in the previous quarter. We use the fact table to obtain the list of distinct Customer_Keys, but those keys refer to potentially historical records in the Type 2 Dim_Customer table. We cannot simply pull the email address from the dimension matching that key, because the customer may well have updated their email address since the last transaction we have recorded. In this case, we don’t want historical customer values; we want current customer values. Going back into the dimension to retrieve the current rows requires some tricky SQL that is likely beyond our business users.
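
To make the pain concrete, here is one way to recover current email addresses for last quarter's buyers against a pure Type 2 dimension, assuming the illustrative columns sketched earlier plus a Transaction_Date column on the fact; few business users would write this unaided.

-- Facts point at possibly-historical dimension rows, so we must hop from the
-- historical row back to the current row via the natural key.
SELECT DISTINCT cur.Customer_Email
FROM Fact_Sales f
JOIN Dim_Customer hist
  ON hist.Customer_Key = f.Customer_Key   -- the row version current at the time of the sale
JOIN Dim_Customer cur
  ON cur.Customer_Id = hist.Customer_Id   -- same customer by natural key...
 AND cur.Current_Indicator = 'Y'          -- ...but the latest version of the row
WHERE f.Transaction_Date >= '2021-01-01'; -- "previous quarter", illustrative date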

One more irritation of SCD Type 2 arises with accumulating snapshot fact tables. This kind of fact table tracks statistics of a dimensional entity (e.g. a customer) as it changes over time. When our dimension is using SCD Type 2, there are several dimension keys that point to the same dimensional entity. We must ensure that we update the accumulating snapshot’s dimension key with the latest SCD Type 2 dimension key to avoid double-counting the rows.

Historical Dimensions Add Clarity

There’s nothing wrong with the basic SCD Type 2 technique of inserting new rows with a new surrogate key. The problems stem from having our only copy of current values mixed in with the historical values in the same table. So to clarify our design and solve the limitations of SCD Type 2 dimensions, we simply keep a copy of the current values separate from the historical and clearly label the two.

To that end, we make the following definitions:

Now, each Dimension that supports history will do so with a Historical Dimension table. The Historical Dimension is distinguished from the Dimension through a different table prefix (or suffix if that is your naming convention). Further, we make the logical distinction that a Key is used to link to a Dimension table, while an HKey is used to link to a Historical Dimension. This allows both a Key and an HKey to exist in the same fact table. One may prefer HistKey over HKey and HistDim rather than HDim; the important point is that the names unambiguously distinguish the two types of tables.

Dimensions are always maintained with overwrite logic, while Historical Dimensions track historical changes through SCD Type 2 inserts. Exactly how this is done is explained shortly.

Historical Dimensions contain every column found in the Dimension table, plus a few more (see Figure 1). Most importantly, they have their own surrogate primary key with a suffix of HKey. We also add an effective date and expiration date as well as the current indicator. The Historical Dimension keeps a complete set of current data (those rows have the current indicator set true), so it is a superset of the Dimension both in its structure and in its content. We may choose to prefix the Dimension attributes in the Historical Dimension with Hist_, to distinguish them from the current value columns in the Dimension table.


Figure 1: Historical Dimensions contain all the columns of the Dimension, including its key.
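
A minimal DDL sketch of the pair, with illustrative column names: the Dimension holds only current values, while the Historical Dimension carries its own HKey surrogate, a link back to the Dimension Key, and the history columns. (The attributes could also carry a Hist_ prefix, as noted above.)

CREATE TABLE Dim_Customer (
    Customer_Key   INT          NOT NULL PRIMARY KEY, -- stable surrogate key, overwrite (Type 1) maintenance
    Customer_Name  VARCHAR(100),
    Customer_Email VARCHAR(100),
    Birth_Date     DATE,
    Audit_Key      INT                                -- identifies the load that last touched this row
);

CREATE TABLE HDim_Customer (
    Customer_HKey     INT NOT NULL PRIMARY KEY,                            -- surrogate key of the history row
    Customer_Key      INT NOT NULL REFERENCES Dim_Customer (Customer_Key), -- link to the current-values row
    Customer_Name     VARCHAR(100),
    Customer_Email    VARCHAR(100),
    Birth_Date        DATE,
    Effective_Date    DATE     NOT NULL,
    Expiration_Date   DATE     NOT NULL,
    Current_Indicator CHAR(1)  NOT NULL
);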

Does keeping the current set of values in the Historical Dimension lead us back to the same issues as with SCD Type 2? No. The current values in the Historical Dimension are just the latest revision in a series of revisions for the Dimension. The separate copy of the always-current Dimension table makes all the difference.

Separating the tables prevents clouding the business user’s understanding of the dimension by sourcing current and historical values from the same table. They intuitively grasp that Dimensions keep current “as now” values and Historical Dimensions hold the “as of” values that associate with facts.

Historical Dimensions Add Flexibility

Fact tables can now include both the Key and the HKey to relevant dimensions. The Fact_Sales table, for example, would contain both Customer_Key and Customer_HKey (see Figure 2). If the user wishes to see “as was then” values, the join to HDim_Customer is made through Customer_HKey. If the user wishes to see “as is now” values, the join to Dim_Customer is made through Customer_Key. One could even join to both if it was necessary to compare “as was then” to “as is now” values. In the case that a particular dimension row has not been updated since the fact was recorded, both the Historical Dimension and the Dimension return the same data values.


Figure 2: Users decide if they want sales transactions with historical values of the customer (join to HDim_Customer through Customer_HKey) or current values (join to Dim_Customer through Customer_Key).
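
For example, a report that compares the customer's name at the time of sale with the name today could join through both keys at once (Sale_Date and Sale_Amount are illustrative fact columns):

-- One fact row carries both keys, so "then" and "now" can sit side by side.
SELECT f.Sale_Date,
       f.Sale_Amount,
       h.Customer_Name AS customer_name_at_sale, -- "as was then" value
       c.Customer_Name AS customer_name_today    -- "as is now" value
FROM Fact_Sales f
JOIN HDim_Customer h ON h.Customer_HKey = f.Customer_HKey
JOIN Dim_Customer  c ON c.Customer_Key  = f.Customer_Key;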

Going back to the previous example, pulling the current email address from historical transactions is now easily handled by the BI front end software. Simply get the distinct list of email addresses from Dim_Customer for the fact table rows in question. Users readily understand this because they know that Dimensions always contain the current values.

Similarly, Historical Dimensions solve the problem of changing SCD Type 2 dimension keys in an accumulating snapshot table. These fact tables can now store the Dimension Key value, which always represents the current value and does not change. An accumulating snapshot fact table would not need to store the HKey value since potentially many Historical Dimension rows apply over the life of the accumulating snapshot fact row.

The Historical Dimension can be browsed on its own to track changes through time, right up to the most current value. There is a link to the Dimension table through the Dimension Key, so there is no need to rely on a natural key such as Customer_Id to identify the same customer throughout its history. Relying on the standard key mechanism means such queries are easy for BI tools and users alike.

Historical Dimensions Ease the ETL Burden

One may wonder if committing to both SCD Type 1 for Dimensions and SCD Type 2 for Historical Dimensions adds to the complexity of the ETL layer. That is an appropriate concern since ETL can be the largest and most difficult technical portion of warehouse development.

Actually, making a consistent separation between current and historical dimensions also clarifies and simplifies the ETL process considerably. With a little setup, SCD Type 2 logic can be encapsulated in a single stored procedure that is called during the Dimension load. Write it once, call it for each Dimension, and never worry about it again.

What is involved in the aforementioned setup? First, we should have a way to identify all rows changed by the dimension load process. Kimball recommends an Audit Key in each fact table whose Audit Dimension tracks when the load started and finished, the number of rows inserted or updated, the number of rows rejected, and similar statistics. I have found the Audit Key concept useful for both facts and dimensions. If we do not have a similar key that tracks instances of a data load event, we could use a date-changed timestamp. As long as all rows inserted or changed in the same session carry the same value, the requirement is met.

Second, define in metadata what kind of update should be performed for each dimension column. Each non-technical column should be marked as “Overwrite” (no need to keep a history of changes for this particular column) or “Insert” (if this column changes, keep its history). This can be done within a data modeling tool that supports custom attributes for columns. For example, in PowerDesigner, define an Extended Attribute for dimension table columns, then set the columns at design time. Next, export the values from the modeling tool to the database, in either a standard database table or your database’s extensible metadata tables.
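
If the modeling tool cannot export these settings directly, a small control table is one simple place to hold them; the table and rule names below are illustrative.

-- Per-column history rules consulted by the dimension history procedure.
CREATE TABLE Dim_Column_History_Rule (
    Dim_Table_Name VARCHAR(128) NOT NULL,
    Column_Name    VARCHAR(128) NOT NULL,
    History_Rule   VARCHAR(10)  NOT NULL,  -- 'Overwrite' or 'Insert'
    PRIMARY KEY (Dim_Table_Name, Column_Name)
);

INSERT INTO Dim_Column_History_Rule VALUES
    ('Dim_Customer', 'Birth_Date',     'Overwrite'), -- corrections simply replace the stored value
    ('Dim_Customer', 'Customer_Name',  'Insert'),    -- changes spawn a new history row
    ('Dim_Customer', 'Customer_Email', 'Insert');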

With that setup in place, simply call the procedure to maintain history just before committing changes from the dimension load. The call might look like this:

EXECUTE Maintain_History (Dim_Table_Name, HDim_Table_Name, Audit_Key);

The Maintain_History procedure pseudocode logic is the following:

[Figure: Maintain_History procedure pseudocode]
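
In outline, the procedure compares each Dimension row touched by the current load against its current Historical Dimension row and applies the per-column rules. The statements below are a simplified, static sketch of that logic using the illustrative names above and an assumed Audit_Key value of 42 for the current load; the real procedure builds equivalent statements dynamically from the catalog and the metadata rules.

-- 1. Columns marked Overwrite: patch the current history row in place.
UPDATE HDim_Customer
SET    Birth_Date = d.Birth_Date
FROM   Dim_Customer d
WHERE  HDim_Customer.Customer_Key = d.Customer_Key
  AND  HDim_Customer.Current_Indicator = 'Y'
  AND  d.Audit_Key = 42;

-- 2. Columns marked Insert: expire the current history row for customers whose
--    tracked columns changed...
UPDATE HDim_Customer
SET    Expiration_Date = CURRENT_DATE,
       Current_Indicator = 'N'
FROM   Dim_Customer d
WHERE  HDim_Customer.Customer_Key = d.Customer_Key
  AND  HDim_Customer.Current_Indicator = 'Y'
  AND  d.Audit_Key = 42
  AND (HDim_Customer.Customer_Name  <> d.Customer_Name
   OR  HDim_Customer.Customer_Email <> d.Customer_Email);

-- 3. ...and add a fresh current row copied from the Dimension
--    (assumes Customer_HKey is auto-generated, e.g. an identity column).
INSERT INTO HDim_Customer (Customer_Key, Customer_Name, Customer_Email, Birth_Date,
                           Effective_Date, Expiration_Date, Current_Indicator)
SELECT d.Customer_Key, d.Customer_Name, d.Customer_Email, d.Birth_Date,
       CURRENT_DATE, '9999-12-31', 'Y'
FROM   Dim_Customer d
JOIN   HDim_Customer h
  ON   h.Customer_Key = d.Customer_Key
 AND   h.Current_Indicator = 'N'
 AND   h.Expiration_Date = CURRENT_DATE   -- just expired in step 2
WHERE  d.Audit_Key = 42;
-- (Brand-new customers, deletions, and error handling are omitted from this sketch.)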

The procedure can read the structure of the Dimension and Historical Dimension tables from the database catalog and execute dynamic SQL to process the changes. The call performs well since the Audit_Key is an indexed value that quickly identifies changed rows. The procedure does not perform a final commit transaction to the data – that is the responsibility of the calling Dimension load. This preserves the Historical Dimension load as part of the all-or-none load of the Dimension.

As a result, developers need only worry about getting Type 1 overwrite logic to work. The more difficult SCD Type 2 logic is completely abstracted away. Separating the history from the current allows us to put the power of SQL to use in identifying changed values and recording them as a separate and repeatable process.

Figure 3a shows two rows in the Dimension and three in the Historical Dimension. John Doe changed his email address on Feb 6, 2019. Figure 3b shows the same two rows subject to more changes. We had entered an incorrect Birth Date for John Doe, while Mary Smith got married and changed her name and email address to reflect her new last name. The Maintain_History procedure would see that Birth Date was the only changed column for John Doe, and since the metadata indicates Overwrite, the Historical Dimension Birth Date column is overwritten. Mary’s changes to Customer_Name and Customer_Email are defined as Insert columns in metadata, so they trigger the insert of a new row into the Historical Dimension.


Figure 3a: Key 101 has had a change of Customer Email value in the past.

Figure 3b: Result of Maintain_History procedure on highlighted Dimension changes. Key 101 has Birth Date overwritten, Key 102 gets a new row due to name and email change.

How to Keep the Structure of Dimensions and Historical Dimensions in Sync

We’ve seen that maintaining the Type 2 SCD logic can be done in a single stored procedure, but what about the burden of maintaining a duplicate set of columns as the structure of our Dimension table evolves over time? This can be a challenge to keep up with manually, but it can be automated. If we have a flexible data modeling tool that supports scripting, spend a little time to script the creation of the Historical Dimension from the Dimension. Or, write the code in SQL or your favorite scripting language to generate the required DDL. It’s not hard, and it only needs to be done once. An effort like this pays off quickly over the long life of the warehouse.

Add Historical Dimensions at Your Own Pace

Recall that moving a dimension from Type 1 SCD to Type 2 SCD was not as simple as it seemed. Though the structure of the dimension table does not change, the meaning of each row does. And that affects existing queries, as they now need to add filters or distinct clauses to get the same results as before. Since we have been allowing our users to query the tables directly, we can’t identify all the code that would need to change. Effectively, adding Type 2 history to an existing dimension is a breaking change.

In contrast, Historical Dimensions can be added gracefully when the design team is ready for it. Nothing about the use of the Dimension table or existing queries will change when a Historical Dimension is added. A new HKey only needs to be added to existing Fact tables. If history values are available, the Historical Dimension can be loaded with them, or it can start as a duplicate of the Dimension table and grow from there. The Maintain_History procedure can recognize an empty Historical Dimension, and copy all Dimension rows to it to “prime the pump”.

If adding the new HKey to the dimension presents practical issues, even that may be avoided. The Historical Dimension has value on its own as the record of changes to the Dimension. It can be browsed independently of any fact tables for analytical and audit purposes. Historical values from the fact table are also easily obtained by querying the Historical Dimension using the Key value, and constraining the fact’s main date value to between the effective date and expiration date of the Historical Dimension.
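
For example, with the illustrative tables above and a Sale_Date column on the fact, the as-of values can be recovered without any HKey on the fact at all:

-- Recover "as of sale" customer values using only the current-dimension key
-- and the fact's date, by ranging over the history rows.
SELECT f.Sale_Date,
       h.Customer_Name AS customer_name_at_sale
FROM   Fact_Sales f
JOIN   HDim_Customer h
  ON   h.Customer_Key = f.Customer_Key
 AND   f.Sale_Date BETWEEN h.Effective_Date AND h.Expiration_Date;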

Historical Dimensions Compared to Kimball SCD 7 and Wikipedia SCD 4

Kimball’s Design Tip #152 refers to SCD Type 7 as dual Type 1 and Type 2 Dimensions. This sounds promising, but it has the critical flaw of not separating the two dimensions. The Current Product Dimension referred to is simply a view over the Product Dimension, where the Current Indicator is true. Renaming the columns in this current dimension view to have a Current_ prefix reinforces this notion. If the Type 2 historical dimension is considered the “real” dimension, the confusion over what the dimension means will linger over the design. A concept of a Durable Key is presented, though it is confusing because it is not the primary key of a table.

In general, using views to avoid duplicating current dimension rows is a questionable idea. Remember that the disk space taken by dimensions is a drop in the bucket compared to facts. Since we have automated the loading of the Historical Dimension, we are no more concerned about that duplication than we are with duplicated descriptions inside the dimension. Views have their own performance characteristics that vary among and within DBMSs. It seems best to keep views for renaming columns in role-playing dimensions and materialized views for aggregate usage.

The Wikipedia Slowly Changing Dimension article calls the history table SCD Type 4. (However Kimball’s SCD Type 4 is an entirely different technique of “Add Mini Dimension”). This technique seems to capture the flavor of the Historical Dimensions presented here but falls short in the implementation. In the example given, the newly added historical row’s surrogate key is used to update the key of the current dimension. This would prevent us from using the stable key of the current dimension to provide current dimension values to any fact row. The power of the clear separation between current and history is lost.

Put Historical Dimensions to Work

Star schema designs are effective because they are clear and easy to query. SCD Type 2 tables muddy that clarity if they are not overtly labelled as historical. Once we take the step to separate current values in the Dimension from historical values in the Historical Dimension, simplicity is maintained. Users will be grateful to choose whether to see historical or current values in their queries. That choice might even be crucial for some business requirements, and developers will appreciate the “write once, use many” approach to history maintenance that Historical Dimensions provide. Project managers will welcome the ability to add history to existing current-only dimensions without breaking existing queries.

Add Historical Dimensions to your data warehouse toolkit, and your data won’t be forced to live in the past.


SQL – Simple Talk


Locus Robotics raises $40 million to take its warehouse robots global

June 2, 2020   Big Data

Warehouse robotics startup Locus Robotics today announced it has raised $40 million, the bulk of which will be put toward accelerating R&D and the company’s expansion into new markets, including in the EU, where it opened a new headquarters. CEO Rich Faulk says Locus also intends to launch strategic reseller partnerships throughout 2020, following a year in which its number of customer deployments passed 50.

Worker shortages attributable to the pandemic have accelerated the adoption of automation. According to ABI Research, more than 4 million commercial robots will be installed in over 50,000 warehouses around the world by 2025, up from under 4,000 warehouses as of 2018. In China, Oxford Economics anticipates 12.5 million manufacturing jobs will become automated, while in the U.S., McKinsey projects machines will take upwards of 30% of such jobs.

Locus’ autonomous robots — LocusBots — can be reconfigured with virtually any tote, box, bin, or container or with peripherals like barcode scanners, label printers, and environmental sensors designed to expedite order processing. They work collaboratively with human associates, minimizing walking with a UI that recognizes workers’ Bluetooth badges and switches to their preferred language. On the backend, Locus’ LocusServer orchestrates multiple robots such that they learn efficient travel routes, sharing the information with other robots and clustering orders to where workers are. As orders come into warehouse management systems, Locus organizes them before transmitting back confirmations — providing managers real-time performance data, including productivity, robot status, and more.


When new LocusBots are added to the fleet, they share warehouse inventory status and item locations. Through LocusServer, they detect blockages and other traffic issues to improve item pick rates and order throughput. Locus’ directed picking technology actively directs workers to their next pick location, letting them select their own pace while optionally accepting challenges through a gamification feature that supports individual, team, and shift goals plus events and a mechanism managers can use to provide feedback. In addition, Locus’ backend collates various long-tail metrics, including hourly pick data, daily and monthly pick volume, current robot locations, and robot charging levels.

Locus offers a “robot-as-a-service” program through which customers can scale up by adding robots on a limited-time basis. For a monthly subscription fee, the company sends robots to warehouses (or retrieves them) upon request, and it provides those robots with software and hardware updates, in addition to maintenance.

Locus claims that its system — which takes about four weeks to deploy — has delivered a 2 to 3 times increase in productivity and throughput and 15% less overtime spend for brands that include Boots UK, Verst Logistics, Ceva, DHL, Material Bank, Radial, Port Logistics Group, Marleylilly, and Geodis. The company’s robots passed 100 million units picked in February, and in April, UPS announced that it would be piloting Locus machines in its own facilities.

“COVID-19 has dramatically accelerated trends that have been taking shape over several years in the logistics market, including the movement to collaborative robotics to deal with the labor crisis,” Faulk told VentureBeat via email, adding that the company’s annual recurring revenue increased 300% in 2020 year-over-year. “Our pipeline is expanding weekly with major global brands needing to automate prior to peak season to address the labor gap they will face this year.”


Zebra Technologies’ Zebra Ventures led this series D investment in Wilmington, Massachusetts-based Locus, with participation from existing backers, including Scale Venture Partners. This round brings the Quiet Logistics spinout’s total raised to over $105 million as it looks to expand its workforce from more than 120 people to 200 by 2021.

Locus competes in the $3.1 billion intelligent machines market with Los Angeles-based robotics startup InVia, which leases automated robotics technologies to fulfillment centers. Gideon Brothers, a Croatia-based industrial startup backed by TransferWise cofounder Taavet Hinrikus, is another contender. And then there’s robotics systems company GreyOrange; Otto Motors; and Berkshire Grey, which combines AI and robotics to automate multichannel fulfillment for retailers, ecommerce, and logistics enterprises. Fulfillment alone is a $9 billion industry — roughly 60,000 employees handle orders in the U.S., and companies like Apple manufacturing partner Foxconn have deployed tens of thousands of assistive robots in assembly plants overseas.



Big Data – VentureBeat


4 Ways a Warehouse Management Solution (WMS) Provides Competitive Advantage

February 27, 2020   NetSuite

Posted by Abby Jenkins, Product Marketing Manager for Inventory & Order Management, Supply Chain & WMS

As e-commerce sales continue to increase, drop shipping becomes the norm and overall expectations around shipping make customer satisfaction more difficult to achieve, competitive advantage for companies that sell or distribute products ultimately comes down to a warehouse’s ability to deliver orders faster and more accurately than ever. Using spreadsheets and paper to track inventory as it moves through the warehouse will no longer suffice.

Deploying a Warehouse Management System (WMS) as part of a larger fulfillment and warehousing strategy helps growing businesses optimize warehouse operations in four key areas:

  • Inbound Logistics
  • Inventory Visibility
  • Outbound Logistics
  • Mobile Scanning

1. Inbound Logistics

As inventory arrives at the warehouse, a WMS helps expedite the receiving process. Using a set of automated processes and pre-defined rules, users are guided through the receiving and putaway process, including recommending bin locations and putaway rules, ensuring inventory is quickly and accurately processed so that it can be used to fulfill orders.

Using a mobile device to receive items automatically assigns the item’s lot number, serial number, bin location and inventory status as it is received. With NetSuite WMS, inventory is automatically allocated to outstanding open orders and can be taken directly to the packing locations, decreasing fulfillment time and handling costs of putting items away and then picking them to fill an order.

2. Inventory Visibility

With a WMS, inventory is tracked as it moves through the warehouse using barcodes, and lot and serial tracking, ensuring accurate visibility of inventory levels (including allocated stock), orders and fulfillment status at all times. It also gives traceability of perishable goods and can automate expiration dates based on receipt date.

You can also schedule regular cycle counting within the WMS as a means of checks and balances. Cycle counting can be scheduled based on category so that high-moving or high-value inventory is counted with more frequency than lower-value items. By assigning cycle counts to staff based on their function or warehouse location, you can ensure regular counts are completed without disrupting daily operations. Automated processes help by automatically sending reminders prompting staff to complete the required counts, including which products to count and how frequently.

With NetSuite WMS, you get a holistic view of inventory across all physical locations. By setting up physical locations hierarchically and using sub-locations, you can view inventory levels per physical location as well as enterprise wide, allowing for more efficient order fulfillment and replenishment.

3. Outbound Logistics

Using pick and pack logic and strategies available with a WMS, users are guided through the order fulfillment process, ensuring inventory is used when and how you want it to be. Defining a wave release strategy, selecting single or multiple picking type, and setting your wave status further customizes and controls the way orders are processed.

If you’re managing inventory across multiple warehouse locations, NetSuite WMS gives you visibility of inventory by location and allows you to define fulfillment and shipping rules. Ensuring whole orders are being shipped from a single location and orders are assigned to the warehouse closest to the destination minimizes shipping costs and simplifies order orchestration, something that would be impossible to do manually.

4. Mobile Scanning

One of the best ways to increase efficiency in warehouse operations is to integrate a mobile app. The combination of a wireless mobile device and barcode scanning helps to automate processes, such as shipping and receiving, putaway, and picking and packing, and it increases overall operator efficiency. Because information is being recorded in real-time with the mobile app, it’s easy to provide a real-time picture of inventory and ensures accuracy throughout the supply chain.

The NetSuite WMS mobile app was designed with the warehouse manager in mind. It has a clean, clear and easy-to-navigate interface that helps reduce the time operators spend completing everyday tasks, such as inbound, inventory and outbound processing tasks. Through the task manager, you have the ability to direct users to perform specific tasks, such as putaways and picking, based on pre-configured strategies defined during the setup of the WMS system, ensuring inventory is allocated according to plan and not haphazardly. With the use of GS1 barcode scanning you can easily enter items for inbound, inventory or outbound processing.

To further customize the user experience in your warehouse, you can create custom processes within the app. Customizing things like defining default values, hiding/displaying fields and adding fields for data capture can be done from the floor, without any technical expertise. You can export and import custom processes to other devices, ensuring the right users have access.

Making the Move from Manual to Automated

By automating processes, improving operational efficiencies and reducing handling time, a WMS optimizes day-to-day warehouse operations, ensuring you can deliver on customer expectations quickly and accurately.

Read more about how NetSuite companies are using WMS to create their optimal warehouse here, and learn more about the latest additions to WMS features and functionality in the NetSuite 2020.1 release.


The NetSuite Blog


High Customer Satisfaction Led to Teradata’s Leadership Distinction in Q4 Big Data Warehouse Landscape Report by The Information Difference

January 30, 2020   BI News and Info

Teradata (NYSE: TDC), the industry’s only Pervasive Data Intelligence company, today announced it has been recognized with the highest technology score in the Big Data Warehouse Landscape Q4 2019 report by The Information Difference, issued Jan. 21, 2020. This year’s report marks the 9th time that Teradata has been included.
 
Five major vendors were evaluated in the report, which represents the market in multiple dimensions based on customer set, customer satisfaction, maturity of the technology, data warehouse revenue, size of partner ecosystem, geographic coverage and more. Teradata ranked highest in the technology dimension, compared to IBM, Magnitude, Microsoft and Oracle.

 
“A significant part of the ‘technology’ dimension scoring is assigned to customer satisfaction, as determined by a survey of vendor customers,” said Andy Hayler, Analyst at The Information Difference. “In this annual research cycle the vendor with the happiest customers was Teradata. Our congratulations to them. This certainly confirms what we regularly hear from Teradata customers about the Vantage platform, which is clearly leading the data warehouse market.”
 
“Customer satisfaction is the ultimate measure of a company’s success and we are delighted that our customers have once again put Teradata at the top of this ranking,” said Chris Twogood, SVP of Global Marketing at Teradata. “We built Vantage to help our customers move from analytics to answers with a cloud-forward platform that unifies analytics, data lakes and data warehouses. This report confirms that we are delivering on that promise.”
 
Teradata Vantage, the company’s flagship product, delivers a modern cloud architecture that enables companies to start small and elastically scale compute or storage as business needs increase. With support for low-cost object stores and seamless integration of analytic workloads, customers can deploy Vantage across public, multi-cloud or hybrid cloud environments, paying only for what they use while leveraging 100% of their available data for analytic answers.
 
To learn more about this report visit: www.informationdifference.com
 
 


Teradata United States


Top 5 Metrics for Measuring Warehouse Productivity

December 19, 2019   NetSuite

Posted by Abby Jenkins, Product Marketing Manager for Inventory & Order Management, Supply Chain & WMS

Companies that make, sell or distribute products typically require receipt, storage and fulfillment of goods into, within, and out of a warehouse. Without a system specifically designed for warehouse management in place, this can quickly become a manual, costly and inefficient process that won’t support a fast-growing business.

Over 90% of warehouses have adopted some sort of warehouse management system.

Optimizing day-to-day warehouse operations is critical to increasing warehouse productivity.

According to the Warehouse Education and Research Council’s 2019 Operational Benchmarking Report, the top 5 KPIs for benchmarking your warehouse productivity are:

  1. Order Picking Accuracy (percent by order) – Incorrect order picking results in increased labor costs, inaccurate inventory counts, delayed shipments and decreased customer satisfaction if the error is not caught before the order ships.

At best-in-class operations, 99.89% of orders are picked correctly.

  2. Average Warehouse Capacity Used – The average capacity used over a certain period of time – this is a key space utilization KPI.

Best-in-class companies utilize on average 92.54% of warehouse space.

  3. Peak Warehouse Capacity Used – Peak capacity is another KPI for tracking how well a warehouse uses its space during its busiest times.

Best-in-class companies utilize 100% warehouse space during peak times.

  4. On-time Shipments – This is a delicate balance to achieve without compromising order accuracy. Today’s consumer is largely brand agnostic and concerned with getting the product they want in the quickest and cheapest way possible.

Best-in-class companies ship 99.7% of orders on time.

  5. Inventory Count Accuracy by Location – Understanding total inventory available is important, but equally important is how that inventory is distributed across multiple locations, so that orders are routed to the proper location for fulfillment and out-of-stocks are not incurred.

Best-in-class companies have 99.9% inventory count accuracy by location.

Click here to see the chart to gauge warehouse performance.

Nearly half of all warehouses are still relying on a paper-based picking method to fulfill orders (meaning Excel or some spreadsheet method).

Warehouse operations can be improved with intelligent pick-and-pack capabilities and a wave-release process, using mobile RF barcode scanning, automating cycle counting and integrating with shipping systems, all of which elevate visibility, accuracy and efficiency.

NetSuite Warehouse Management System (WMS) offers industry leading warehouse management functionality. Specifically designed with the warehouse manager in mind, NetSuite WMS functionality is aimed at improving the user experience and warehouse processing operations. Because NetSuite WMS is built into NetSuite – utilizing core ERP locations, items, bins, inventory and transactions – product companies don’t have to worry about integration efforts or data synchronization.

Reduce operating expenses, improve inventory visibility, achieve better labor management and increase customer service by implementing a WMS system. Learn more about how companies are incorporating a WMS into their overall fulfillment strategy.



The NetSuite Blog


AI startup Gather uses drones and computer vision for warehouse inventory

August 17, 2019   Big Data

Gather, a company that uses autonomous drones for warehouse inventory, launched out of stealth today. Founded in 2017, the company of about 10 employees is based in Pittsburgh.

Gather’s founding team is made up of graduates of the Robotics Institute at Carnegie Mellon University, including cofounder and chief robotics officer Sankalp Arora, whose work on autonomous helicopters with a team at the Office of Naval Research won the 2018 Howard Hughes Award.

Earlier this year, Gather closed a $2.5 million funding round to bring its products to market and grow its computer vision and software offerings.

Gather supplies software for the autonomous operation of drones that can connect with existing warehouse management systems and IoT devices such as motion sensors. Computer vision is then used to scan and count inventory.

The solution is about 60% cheaper than traditional methods that rely on people alone, Gather said. Gather software is currently used in an undisclosed number of warehouses, but one air cargo company was able to reduce inventory time from 8 hours to 15 minutes, and another warehouse took inventory 10 times faster than two employees with a forklift.

“People never have to leave their desk to go out and do an inventory,” Arora told VentureBeat in a phone interview. “We’re not outfitting these things with any specific hardware. It’s a drone that you can go buy at Best Buy.”

Gather was founded in January 2017 by Arora, Daniel Maturana, Geetesh Dubey, and Robb Myer, an entrepreneur-in-residence at CMU.

Since Gather can connect with IoT devices, drones can deploy anytime a motion sensor is triggered. Arora argues a team of drones can multiply savings for warehouses by eliminating the need to install cameras for security or surveillance.

Beyond reading barcodes and counting boxes, drones running Gather’s computer vision also produce hourly inventory image sets that warehouse managers can use to verify inventory reports or look back through records.

Cameras are also used to help the drones avoid running into objects.

“Our semantic mapping enables a single camera to map identified objects up to a range of 150 meters,” Arora said. “So what that means is we can follow rules of engagement in these facilities. For example, if a manager tells us ‘I don’t want this drone within 15 meters of a forklift,’ we can map those objects using semantic cues from known objects and enforce that rule of engagement.”
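As an illustration of the kind of rule Arora describes (keeping a drone at least 15 meters from any detected forklift), a simple distance check over semantically labeled detections might look like the following sketch; it is not Gather’s implementation, and all names and values are hypothetical:

```python
# Illustrative only: enforce a "stay 15 m away from forklifts" rule against
# a list of semantically labeled detections (object class + 3D position).
import math

MIN_FORKLIFT_DISTANCE_M = 15.0

def violates_rule(drone_pos, detections, min_dist=MIN_FORKLIFT_DISTANCE_M):
    """Return True if any detected forklift is closer than the allowed distance."""
    for obj in detections:
        if obj["label"] != "forklift":
            continue
        if math.dist(drone_pos, obj["position"]) < min_dist:
            return True
    return False

detections = [
    {"label": "forklift", "position": (10.0, 2.0, 0.0)},
    {"label": "pallet", "position": (3.0, 1.0, 0.0)},
]
print(violates_rule((0.0, 0.0, 1.5), detections))  # True: forklift is ~10 m away
```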

Computer vision from companies like Autodesk and Indus.ai is being applied in industrial environments to improve safety and regulatory compliance. Such systems may also play a future role in orchestrating the movement of people and machines as more robots enter warehouses. A number of challenges stand in the way of fully autonomous warehouses, Arora said.

“I think a good way to put it is you have to deal with unpredictable decision makers, as well as flexibility of the environment, as you are making these things automated. And that is where I guess the final hill for robotics is,” he said. “You’ve got amazing robots making our cars, doing manufacturing on manufacturing lines, but as soon as the environment becomes dynamic, it changes and you have to interact with humans or you have to be flexible given the environment, then the perception challenges and the planning challenges associated with it become several orders of magnitude more difficult.”

Recent robotics ramp-ups include Brain Corp’s initiative to bring robots to warehouses and a $46 million funding round for Fetch Robotics in June, while DHL announced a $300 million investment last year to quadruple the number of robots in its North American facilities.

Amazon, which rolled out its new Pegasus sorting robot in June, predicted earlier this year that fully autonomous Amazon warehouses are at least 10 years away.

The $2.5 million funding round was led by Expa and joined by Dundee VC, Bling Capital, Comeback Capital, XRC Labs, Summer League Ventures, and Plexo Capital.


Big Data – VentureBeat


5 Advantages of Using a Redshift Data Warehouse

March 26, 2019   Sisense

Choosing the right solution to warehouse your data is just as important as how you collect data for business intelligence. To extract the maximum value from your data, it needs to be accessible, well-sorted, and easy to manipulate and store. Amazon’s Redshift data warehouse offers such a blend of features, but even so, it’s important to understand what it brings to the table before deciding to adopt it.

So, what are the benefits of moving your data storage to a Redshift data warehouse? In addition to significant storage capacity, the solution delivers several key advantages that make it an intriguing, and possibly ideal, choice for business intelligence. These are five of the biggest advantages of using Redshift for your business intelligence needs.

It Offers Significant Query Speed Upgrades

With larger datasets—especially at petabyte scale—query speed understandably lags. However, most database and warehouse solutions today can process requests and other operations in parallel, and Redshift’s architecture has clocked some of the fastest general and query speeds among them.

Comparing Redshift with Hadoop, for instance, shows that overall the former is nearly 10 times faster than the latter, and in some query tests the Redshift database easily outstrips Hadoop in returning results. Amazon’s massively parallel processing (MPP) architecture lets BI tools with a Redshift connector run several queries across multiple nodes simultaneously while spreading the workload.
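One concrete way this parallelism surfaces to users is in table design: Redshift DDL lets you declare distribution and sort keys that control how rows are spread across nodes and scanned. The table and column names below are hypothetical; this is a sketch, not a recommendation for any particular schema.

```python
# Hypothetical example: Redshift DDL with a distribution key and sort key so
# the cluster can spread rows across nodes and prune scans by date.
CREATE_SALES = """
CREATE TABLE sales (
    sale_id     BIGINT,
    customer_id BIGINT,
    sale_date   DATE,
    amount      DECIMAL(12, 2)
)
DISTSTYLE KEY
DISTKEY (customer_id)  -- co-locate each customer's rows on one slice
SORTKEY (sale_date);   -- prune blocks when filtering on date
"""
print(CREATE_SALES)
```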


It Focuses on Ease of Use and Accessibility

Even though SQL is more than 30 years old, SQL-based systems such as MySQL and PostgreSQL remain among the most popular and most approachable interfaces for database management. Their simple, query-based model makes platform adoption and acclimation a breeze. Instead of building a completely new interface that would require significant resources and time to learn, Amazon chose to build a platform that works much like the SQL databases teams already know—in Redshift’s case, PostgreSQL—to great effect.

While it does change some aspects, Redshift keeps much of what makes PostgreSQL familiar: it works with standard PostgreSQL, JDBC, and ODBC drivers, which makes it easy to connect with most business intelligence tools. It also connects readily with other existing tools and offers an easy learning curve for new administrators and even end users.
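For example, because the endpoint is PostgreSQL-compatible, an ordinary PostgreSQL driver can query it. The sketch below uses psycopg2 with placeholder connection details (the host, database, and credentials are not real):

```python
# Minimal sketch: because Redshift exposes a PostgreSQL-compatible endpoint,
# a standard PostgreSQL driver such as psycopg2 can query it. The host,
# database, and credentials below are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,                # Redshift's default port
    dbname="analytics",
    user="bi_user",
    password="REPLACE_ME",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT current_database(), version();")
    print(cur.fetchone())
conn.close()
```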

It Provides Fast Scaling With Few Complications

Redshift is cloud-based and hosted directly on Amazon Web Services, the company’s existing cloud infrastructure. One of the biggest benefits this provides is a flexible architecture that can scale in seconds to meet changing storage demands. That matters because, for organizations with rapidly changing data requirements, scaling has traditionally been both costly and complex.

Thanks to AWS, Redshift can be scaled up or down quickly by adding or removing nodes of varying sizes. This scalability also means cost savings, as companies aren’t forced to spend money maintaining unused servers or to purchase expensive server capacity on short notice when the need arises. This is especially useful for smaller companies that experience significant growth and must scale their existing solutions.
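As one illustration of how straightforward this can be, resizing a cluster is a single API call. The boto3 sketch below uses placeholder values for the cluster identifier, node type, and node count; it is illustrative, not prescriptive.

```python
# Illustrative sketch: resize an existing Redshift cluster through the AWS API.
# The cluster identifier, node type, and node count are placeholders.
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")
redshift.resize_cluster(
    ClusterIdentifier="example-cluster",
    ClusterType="multi-node",
    NodeType="dc2.large",
    NumberOfNodes=4,
)
```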

It Keeps Costs Relatively Low

Amazon Web Services bills itself as a cost-effective solution for companies of all sizes. In keeping with that positioning, Redshift offers a pricing model that delivers flexibility while letting companies keep a tighter watch on their data warehousing costs. This flexibility is a result of the underlying cloud infrastructure and Redshift’s ability to keep workloads to a minimum on most nodes.

Additionally, organizations can choose the pricing model they prefer: on-demand or reserved instances. The former is generally more appealing to smaller companies or those with lighter data warehousing needs, while the latter rewards a longer-term commitment with lower, more predictable costs for steady workloads. More than simple dollars and cents, this pricing flexibility means scaling stays both possible and straightforward.
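The trade-off can be reasoned about with back-of-the-envelope arithmetic. The hourly rates in this sketch are purely hypothetical placeholders, not published AWS prices:

```python
# Back-of-the-envelope comparison of on-demand vs. reserved pricing.
# The hourly rates below are hypothetical placeholders, not actual AWS prices.
HOURS_PER_YEAR = 24 * 365

on_demand_rate = 0.25  # $/node-hour, hypothetical
reserved_rate = 0.16   # effective $/node-hour with a 1-year commitment, hypothetical
nodes = 4

on_demand_cost = on_demand_rate * nodes * HOURS_PER_YEAR
reserved_cost = reserved_rate * nodes * HOURS_PER_YEAR
print(f"On-demand: ${on_demand_cost:,.0f}/yr, reserved: ${reserved_cost:,.0f}/yr")
print(f"Savings from reserving: {1 - reserved_cost / on_demand_cost:.0%}")
```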

It Gives You Robust Security Tools

Massive data sets often contain sensitive data, and even if they don’t, they still hold important information about their organizations. As such, the right data warehouse solution should have powerful protection tools to lock down data. Redshift presents a few different encryption and security tools that make protecting warehouses even easier.

This includes a VPC for network isolation as well as different access control tools that give you more granular management capabilities. Additionally, Redshift includes SSL encryption for data in transit, and AWS’ S3 servers offer both client- and server-side encryption, giving you greater control over when data is viewable and accessible.
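As an illustration of these controls in use, a client can require TLS on the database connection and request server-side encryption when staging files in S3. All hostnames, credentials, and bucket names below are placeholders:

```python
# Illustrative sketch: require TLS for the Redshift connection and request
# server-side encryption when staging a file in S3. All identifiers below
# are placeholders.
import boto3
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="bi_user",
    password="REPLACE_ME",
    sslmode="require",  # refuse to connect without encryption in transit
)
conn.close()

s3 = boto3.client("s3")
s3.upload_file(
    "extract.csv", "example-staging-bucket", "loads/extract.csv",
    ExtraArgs={"ServerSideEncryption": "AES256"},  # S3-managed server-side encryption
)
```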

Choosing the Right Warehouse

Building a successful BI ecosystem for your organization begins with data. By choosing a warehouse that meets your requirements and grants you the flexibility to grow and scale, you can give your business intelligence even greater value while concurrently deriving much better insights and analytics.


Tags: Data Preparation | Data Warehouse


Blog – Sisense
