
Tag Archives: rules

Codd’s Twelve Rules

April 21, 2020   BI News and Info

Let me take you back to those thrilling days of yesteryear when the relational model and relational databases were the fads that all hip kids were getting into. Yes, we have fads in IT. Sometimes it’s a new programming language that will solve all of our problems. Think about PL/1, Ada, and dozens of others that have come and gone. Sometimes it’s a new technique that will solve our problems. Structured Programming, Software Engineering, Agile, and others I have forgotten. We used to joke that the current “fad du jour” was like teenage sex. Everyone claimed they knew what it was; they were already doing it and had always done it, but they were a little weak on some of the details.

Software vendors were particularly bad about this. Any database product they had was relational! Just ask the salesperson; you know salespeople never lie. Dr. Codd, the creator of relational databases, was bothered by this, so he laid down a set of 13 rules that a product had to satisfy to be considered relational. The paper is referred to as “Codd’s Twelve Rules” or sometimes as “Codd’s Twelve Commandments”, despite the fact that there were actually 13 of them, because the numbering started with zero. In particular, Rule 12 was created to prevent some of this marketing hype.

I’ll begin by going through the rules.

Rule 0: The Foundation Rule

For any system that is advertised as, or claimed to be, a relational database management system, that system must be able to manage databases entirely through its relational capabilities. This means you don’t get to use a host language to do anything inside the database. If you remember the original database systems, they had to be embedded in COBOL or some other host language. They were essentially collections of procedure calls for data access, rather than what we would think of as a database today.

Rule 1: The Information Rule

All information in a relational database is represented explicitly at the logical level and in exactly one way – by values in tables.

Notice the phrase “logical level” and that there’s nothing about how physical storage is done. There’s nothing about pointer chains. There’s nothing about the physical position of data in arrays, files, or anything else. Remember that Dr. Codd started as a mathematician, so he thought abstractly. Today, this rule is worded as “scalar values in the columns of rows in tables”, but various vendor implementations allow non-scalar data in the table.

There is also some confusion about “atomic” versus “scalar”; a value is atomic if it cannot be further decomposed without losing information. For example, an American shoe size of 8½ B is a complete measurement. Either the length (8½) or the width (B) by itself has lost information about the actual shoe size. The term scalar has to do with scales and measurements, and I have written separate articles on this topic.

Rule 2: The Guaranteed Access Rule

Each and every datum (atomic value) in a relational database is guaranteed to be logically accessible by resorting to a combination of table name, primary key value and column name.

Please note that Dr. Codd was still talking about a “primary key” at this time. This particular term is also part of SQL. We still agree that to have a table, you must have a key, because that is how data is located in an RDBMS.

But the concept of a primary key was a leftover from sequential files; it’s so ubiquitous that we didn’t even think about it. To use a sequential file, the data has to be sorted on a key. Obviously, you can have only one such sort key. A bit later, Dr. Codd realized that all keys are keys at the logical level. To paraphrase “Animal Farm”, we realized you couldn’t have some keys “more equal than others”, so there was no need to make one special.
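To make the Guaranteed Access Rule concrete, here is a minimal sketch using the Sales table that appears later in this article: the combination of table name, key value, and column name is enough to reach any single datum.

SELECT sales_amt               -- column name
  FROM Sales                   -- table name
 WHERE state_code = 'AL'       -- primary key value (both key columns)
   AND city_name = 'Birmingham';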

Rule 3: Systematic Treatment of NULL Values

NULL values (distinct from the empty character string or a string of blank characters and distinct from zero or any other number) are supported in fully relational DBMS for representing missing information and inapplicable information in a systematic way, independent of data type.

This is another concept that’s grown since Codd’s original paper. In the second version of the relational model, Dr. Codd defined “applicable” and “inapplicable” forms of NULL. An applicable NULL, a type A NULL, is used when the entity has the attribute, but we don’t know what its value is right now. An inapplicable NULL, a type I NULL, is used when the entity simply doesn’t have the attribute, so its value cannot ever be resolved.

Because these two NULLs did not come along until after we had gotten pretty far into SQL, the language only has one NULL which serves both purposes. Unfortunately, it gets more complicated after that.

The first complication is that Dr. Codd defined the NULL as having no data type. When you’re implementing a compiler for a strongly typed language like SQL, you really need a data type for all of the data elements. This is why we have to write CAST(NULL AS <data type>) to play safe, and why being NULL-able is a part of the declaration of a column in the DDL.
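As a sketch of why that CAST matters, consider padding a result set with a placeholder column; without the CAST, the compiler has no type for the bare NULL. This uses the Sales table defined a bit later in this article, and the projected_amt name is purely illustrative:

SELECT state_code, city_name,
       CAST(NULL AS DECIMAL(12,2)) AS projected_amt  -- a typed NULL placeholder
  FROM Sales;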

The second complication is how a NULL sorts. Is it always higher than any value in the data type of the column? Always lower? Originally, each vendor could have its own rules. Currently, the ANSI/ISO standards let the programmer explicitly determine how NULLs are sorted.
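In ANSI/ISO syntax, that choice is spelled out in the ORDER BY clause. A minimal sketch, assuming a hypothetical Personnel table with a NULL-able bonus_amt column (not every product supports this syntax):

SELECT emp_name, bonus_amt
  FROM Personnel
 ORDER BY bonus_amt DESC NULLS LAST;  -- NULLS FIRST is the other option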

The third complication is that even if you have declared everything in your schema to be NOT NULL, SQL will generate NULLs. There is no escape! The OUTER JOINs create NULLs in the unpreserved table, to pad out the rows in the result. In some cases, returning an empty set will be shown as a NULL. We also have NULLs created by an OLAP operation. OLAP is worth an article in its own right, but for now, consider the basic operations that are options in the GROUP BY clause: the CUBE, ROLLUP, and GROUPING SETS operations.
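Before turning to ROLLUP, here is a quick sketch of an OUTER JOIN manufacturing a NULL. It assumes a hypothetical Cities table listing cities that have no rows yet in the Sales table defined just below:

SELECT C.city_name, S.sales_amt   -- sales_amt comes back NULL for cities with no sales
  FROM Cities AS C
  LEFT OUTER JOIN Sales AS S
    ON C.city_name = S.city_name;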

Looking at just the ROLLUP as a representative example, this lets us write what we would have called a “control break” report in the old days. These are the basic reports that would list the details and then, at each level of the hierarchy, print out aggregate functions (usually summations) between the groups. For decades, this was the major use of computers, green-bar continuous printer paper, and data processing departments.

But unlike a report, the result of a query has to return a table. And by definition, the rows in a table all have the same structure. This is easier to see with an example. Assume I have a simple table of sales data:

CREATE TABLE Sales
(state_code CHAR(2) NOT NULL,
 city_name VARCHAR(20) NOT NULL,
 sales_amt DECIMAL(12, 2) NOT NULL,
 PRIMARY KEY (state_code, city_name));

My OLAP query gives me the totals at the state level and the city level:

SELECT state_code, city_name, SUM(sales_amt) AS sales_total
  FROM Sales
 GROUP BY ROLLUP(state_code, city_name);

The resulting output from this query looks like this:

AL    Birmingham    200.00  ← detail rows
…
AL    NULL         4500.00  ← state-level total rows
…
NULL  NULL       115000.00  ← grand total row

Much like the OUTER JOINs, the OLAP operations create NULLs which were not in the original data. As is usual with grouping in SQL, the NULLs are treated as a group in their own right. But there’s a serious problem here. What does the NULL at these various levels mean? It’s not quite the same as an A-type or I-type missing value. For example, suppose we are looking at a row that gives the city-level totals. There are no NULLs at that level, but when I go up to the state level, the NULL in the city column is shorthand for a whole subset of city names. This is a very different meaning from the NULLs which are used as scalar values.

As expected, the NULLs form their own group. However, how would I tell the difference between a created NULL and one that was in the original data? The solution, which we got in the SQL:2003 standard, is a function:

GROUPING (<column reference 1>, ..., <column reference n>)

This function returns a vector in which a one means the NULL in that position was created by the query, and a zero means it was not.

SELECT CASE GROUPING (state_code)
       WHEN 1 THEN 'State Totals'
       ELSE state_code END AS state_level,
       CASE GROUPING (city_name)
       WHEN 1 THEN 'City Totals'
       ELSE city_name END AS city_level,
       SUM(sales_amt) AS sales_total
  FROM Sales
 GROUP BY ROLLUP(state_code, city_name);

Another consideration with NULLs is when we use them with temporal data. The ISO-8601 standards are based on a half-open interval model of time. That means we always know the starting point in time of an interval, but there may or may not be an ending point yet. If the event is still ongoing, we can’t terminate it. This is why there is no such thing as 24:00:00 Hrs today because it is actually 00:00:00 Hrs of the next day.

The classical way of modeling ongoing events is to use NULL for the ending timestamp. This lets us write things like COALESCE(event_ending_timestamp, CURRENT_TIMESTAMP) in our queries. But it also means that a NULL can be interpreted as a symbol for “eternity”, which we don’t have in SQL.
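A minimal sketch of that idiom, assuming a hypothetical Events table with an event_starting_timestamp and a NULL-able event_ending_timestamp; an ongoing event is treated as running up to the current moment:

SELECT event_id
  FROM Events
 WHERE CURRENT_TIMESTAMP BETWEEN event_starting_timestamp
                             AND COALESCE(event_ending_timestamp, CURRENT_TIMESTAMP);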

Rule 4: Dynamic Online Catalog Based on the Relational Model

The database description is represented at the logical level in the same way as ordinary data, so that authorized users can apply the same relational language to its interrogation as they apply to the regular data.

SQL products are generally pretty good about this, but you have to be careful that you don’t mix data and metadata. There are schema information tables defined in the standards. In the real world, each vendor has its own schema information tables, which also include things that are particular to its implementation.
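For example, the standard INFORMATION_SCHEMA views can be queried like any other table. The exact view names and identifier case conventions vary by product, so treat this as a sketch:

SELECT table_name, column_name, data_type
  FROM INFORMATION_SCHEMA.COLUMNS
 WHERE table_name = 'SALES';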

Rule 5: The Comprehensive Data Sublanguage Rule

A relational system may support several languages and various modes of terminal use (for example, the fill-in-the-blanks mode). However, there must be at least one language whose statements are expressible, per some well-defined syntax, as character strings and that is comprehensive in supporting all of the following items:

1. Data definition.

2. View definition.

3. Data manipulation (interactive and by program).

4. Integrity constraints.

5. Authorization.

6. Transaction boundaries (begin, commit and rollback).

Again, SQL products are pretty good about this. Unless you’re a bit older, you might not have ever seen QBE (Query By Example) or other such tools. SQL is divided into three sublanguages: DDL (data definition language), DCL (data control language), and DML (data manipulation language). When we were designing the sublanguages in the SQL standards, we very deliberately decided that the languages would be LALR(1). This has to do with the type of grammar used by the parsers in a computer language. If you like having flashbacks to your freshman compiler-writing classes, you could look up one of these two articles:
https://en.wikipedia.org/wiki/LALR_parser
https://web.cs.dal.ca/~sjackson/lalr1.html

Rule 6: The View Updating Rule

All views that are theoretically updatable are also updatable by the system.

This requirement simply does not work. Updatable views are incredibly complicated, and it’s not practical to implement them fully. In particular, you might want to read the book by Chris Date, “View Updating & Relational Theory: Solving the View Update Problem” (ISBN 9781449357849).

Vendors have pretty much settled for an updatable VIEW having to map to distinct rows in a single base table. Trying to do an “un-JOIN” on a VIEW built on more than one table is problematic. If the VIEW has computations, trying to come up with the inverse functions is not always possible. Throw in some CASE expressions, and it’s an incredible mess.

The WITH CHECK OPTION has been in the language almost since the beginning, but it’s not well understood. Again, it’s one of those things which is easier to explain with an example. Let’s assume we have a skeleton table of our salespersons:

CREATE TABLE Salespersons
(emp_id CHAR(10) NOT NULL PRIMARY KEY,
 emp_name VARCHAR(25) NOT NULL,
 emp_city_name VARCHAR(25) NOT NULL,
 ..);

Now create a VIEW of the salespersons in Austin, Texas:

CREATE VIEW Austin_Salespersons
AS
SELECT emp_id, emp_name, emp_city_name
  FROM Salespersons
 WHERE emp_city_name = 'Austin, TX';

This is a perfectly good VIEW of all the guys working in Austin, Texas. Since it obviously has one row in the VIEW mapping to one row in the base table, we can write:

UPDATE Austin_Salespersons
   SET emp_city_name = 'Boston, MA';

Oops! We just moved everybody out of Texas to Massachusetts. However, if we had put a WITH CHECK OPTION on the end of the CREATE VIEW statement, then the WHERE clause would be re-evaluated and such an update would be disallowed, because it moves rows out of the VIEW.
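Here is what that safer version looks like; with the check option in place, the same UPDATE is rejected:

CREATE VIEW Austin_Salespersons
AS
SELECT emp_id, emp_name, emp_city_name
  FROM Salespersons
 WHERE emp_city_name = 'Austin, TX'
WITH CHECK OPTION;

UPDATE Austin_Salespersons
   SET emp_city_name = 'Boston, MA';  -- now fails: the rows would leave the VIEW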

Rule 7: Possible for High-Level Insert, Update, and Delete

The capability of handling a base relation or a derived relation as a single operand applies not only to the retrieval of data but also to the insertion, update and deletion of data.

Obviously, insertion, update, and deletion can be done on base tables in every SQL product. However, the most complicated options are in the MERGE statement.
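As a sketch of that set-oriented style, a single MERGE statement can insert or update whole sets of rows in the Sales table at once. The Daily_Sales staging table here is hypothetical:

MERGE INTO Sales AS S
USING Daily_Sales AS D
   ON S.state_code = D.state_code
  AND S.city_name = D.city_name
 WHEN MATCHED THEN
      UPDATE SET sales_amt = S.sales_amt + D.sales_amt
 WHEN NOT MATCHED THEN
      INSERT (state_code, city_name, sales_amt)
      VALUES (D.state_code, D.city_name, D.sales_amt);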

Rule 8: Physical Data Independence

Application programs and terminal activities remain logically unimpaired whenever any changes are made in either storage representations or access methods.

Again, SQL is pretty good about this feature. The older network and hierarchical databases required rewriting the code when a new index, hash table, or whatever was added. You had to open a given index and use it explicitly.

Rule 9: Logical Data Independence

Application programs and terminal activities remain logically unimpaired when information-preserving changes of any kind that theoretically permit unimpairment are made to the base tables.

Notice the idea is that the information is preserved, even when its representation is altered. For example, if you change a column from the INTEGER to the DECIMAL(n, 0) data type, and don’t permit fractional values, then you should expect either representation to behave the same way.

Again, SQL is pretty good about this feature. A SELECT statement runs as written, even after I’ve altered the tables. This concept was very hard for traditional programmers who grew up with conventionally compiled languages. If I compiled a Fortran program with a particular Fortran compiler, the executable code would always be the same. The same query run on the same version of SQL can produce different execution plans, depending on the other users (can we share data among them?), the current data types (has a column been altered within its data type family?), changed access methods, and data statistics.

Rule 10: Integrity Independence

Integrity constraints specific to a particular relational database must be definable in the relational data sublanguage and storable in the catalog, not in the application programs.

Data integrity is part of the DDL in SQL. SQL engines allow column-level constraints (CHECK(), DEFAULT, NOT NULL) and simple DRI table constraints (REFERENCES). In the ANSI/ISO standards, we also have a CREATE ASSERTION statement that is like a schema-level CHECK() constraint. A table constraint is trivially true for an empty table, but an assertion can handle empty tables and span multiple tables.
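As a sketch, an assertion can require that a table never be empty, something a table-level CHECK() cannot express because a CHECK is trivially true on an empty table. Very few shipping products actually implement CREATE ASSERTION, so treat this as standard syntax rather than portable code:

CREATE ASSERTION Enough_Salespersons
CHECK ((SELECT COUNT(*) FROM Salespersons) >= 1);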

Rule 11: Distribution Independence

The end-user must not be able to see that the data is distributed over various locations. Users should always get the impression that the data is located at one site only.

This is a little more vendor-dependent, but SQL has no syntax to locate the physical storage of the data. When we look at RAID storage, we have no idea where the physical data is kept or even which copy of the data we are currently using. But truly distributed databases, in the sense of the Cloud or other network configurations, didn’t exist when Dr. Codd set up these rules. In the years since then, managing a distributed database has become a separate topic in itself.

Rule 12: The Non-subversion Rule

If a relational system has a low-level (single-record-at-a-time) language, that low level cannot be used to subvert or bypass the integrity rules and constraints expressed in the higher-level relational language (multiple-records-at-a-time).

We have cursors for this! In fact, the SQL cursor model is based on the IBM magnetic tape drive functions. We didn’t have much choice when we were setting up the first RDBMS products, since they were built on top of existing file systems and hardware. There was not much parallelism, column-oriented data storage, advanced hashing, or any of the other advances in computer science and hardware. Later we added SQL/PSM, PL/SQL, Informix 4GL, T-SQL, and other procedural languages that work on a particular vendor’s product.

While it is possible in SQL products to turn off constraints, the rule has always been that by the end of the session, the database must return to a consistent state with all of the constraints turned back on. The ANSI/ISO SQL Standard lets you declare constraints as initially deferred, initially immediate (but deferrable later), or not deferrable at all. SQL Server has commands to turn the constraints on and off explicitly.

Deferring constraints is usually done when you need to get an initial state in the database that has to do with self-referencing constraints. Such things are referred to as “Garden of Eden” constraints, and they can be a bit tricky. For example, a constraint on a new row may have to refer back to rows that already exist in the table. But if you’ve just created the table, there are no rows to reference! So we need to defer this constraint, insert an initial row, and then turn checking back on.
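A sketch in ANSI/ISO syntax, using a hypothetical self-referencing Org_Chart table. Because the foreign key is declared deferrable, the rows can be inserted in any order and the check fires at COMMIT (SQL Server instead toggles checking with ALTER TABLE ... NOCHECK CONSTRAINT):

CREATE TABLE Org_Chart
(emp_id CHAR(10) NOT NULL PRIMARY KEY,
 boss_emp_id CHAR(10) NOT NULL
   REFERENCES Org_Chart (emp_id)
   DEFERRABLE INITIALLY DEFERRED);

START TRANSACTION;
INSERT INTO Org_Chart VALUES ('0000000002', '0000000001');  -- boss row does not exist yet
INSERT INTO Org_Chart VALUES ('0000000001', '0000000001');  -- the "Garden of Eden" row
COMMIT;  -- the REFERENCES constraint is checked here and passes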

Conclusion

One of our problems in IT is that terms drift. We are not as bad as politics, but it can get pretty bad if you don’t have some kind of guidepost. I think that Dr. Codd did a pretty good job of preserving a definition of the relational model. And while it took us a little while, I think SQL does a good job of meeting his goals as well.

References:

1. Codd, E. F., “Is Your DBMS Really Relational?”, ComputerWorld (1985-10-14).

2. Codd, E. F., “Does Your DBMS Run By the Rules?”, ComputerWorld (1985-10-21).

3. Codd, E. F., The Relational Model for Database Management: Version 2, ISBN 978-0-201-14192-4.


SQL – Simple Talk


California’s data privacy rules get clearer

February 16, 2020   Big Data

On Friday, February 7, the California Office of the Attorney General (CAG) published a “notice of modifications” to the California Consumer Privacy Act (CCPA), followed by an update on Monday, February 10. Although the CCPA is now law, the rulemaking process is still ongoing, with a final draft of the law expected sometime before the anticipated enforcement date of July 1, 2020. The CAG is now accepting public comments on these proposed modifications until Tuesday, February 25.

While the latest update doesn’t provide us with the final regulations, it offers much needed clarity in several key areas.

1. The scope of data & businesses subject to CCPA processes is clearer

One of the critical lessons from December’s CCPA hearings was that the law required further clarification on terms essential to the operationalization of the CCPA. This month’s updates do a decent job of alleviating some of the uncertainty by providing definitions, examples, and additional clarifying language. Some highlights include:

Clarification on the definition of “personal information.” A new section titled “Guidance Regarding the Interpretation of CCPA Definitions” (§ 999.302) has been created. Currently, there’s only one subsection (a), which defines what qualifies as personal information (PI) under the CCPA using IP addresses as an illustration. The key takeaway is that whether data is classified as PI depends on if it is — or can be — linked to a consumer or household. Given the title of the section, other terms may be clarified in this fashion at a later point.

New communication methods for accepting data requests are specified. Section 999.312, “Methods for Submitting Requests to Know and Requests to Delete,” now clarifies that businesses should consider making consumer requests for data available through “the methods by which it primarily interacts with consumers.” Subsection (a) states that online-only businesses need only provide an email for customers to submit requests to know. The language around how to accept delete requests, however, remains largely the same.

Exclusions now exist for fulfilling consumer requests to know. New language in subsection (c) of § 999.313, “Responding to Requests to Know and Requests to Delete,” excludes businesses from having to search for PI to fulfill a consumer request for data if several conditions are met. The business must not maintain the PI in a searchable or reasonably accessible format, and the PI must only be maintained for legal or compliance purposes. Finally, the business cannot sell the PI or use it for commercial purposes. If a business informs consumers of these reasons, then it can be exempt from having to include PI meeting these conditions within a consumer request for data.

Explicit details now exist for how service providers can use PI. Section 999.314, “Service Providers,” goes into greater detail about what any entity defined as a service provider can and cannot do with PI. Specifically, subsection (c) has been completely rewritten to list five exceptions where service providers are permitted to retain, use, or disclose personal information. One of the exceptions allows service providers to use data to improve the quality of their services or clean and augment data.

In addition to these highlights, the proposed changes also elaborate on the scope of the CCPA as it applies to entities like authorized agents, who can make requests on a consumer’s behalf, as well as data brokers and other third parties.

2. We now have more details on how opt-out requests and do not track will work

New language in § 999.315, “Requests to Opt-Out” suggests that regulators intend for consumer opt-out requests to be as painless as possible. Subsection (c) seems to be worded explicitly to address the problem of UX “dark patterns” within privacy controls, stating “… a business shall not utilize a method that is designed with the purpose or substantial effect of subverting or impairing a consumer’s decision to opt-out.” Given that dark patterns are suspected of helping companies circumvent parts of the GDPR, the new CCPA subsection makes sense, though it’s not clear how it’ll be enforced.

Additionally, subsections (d)(1) and (d)(2) discuss the role that global privacy controls, such as browser settings like do not track, will play in opt-out requests. Privacy controls that function in accordance with the CCPA are to be treated as opt-out requests, even in the instance they conflict with a consumer’s business-specific settings. Businesses, however, may notify consumers of the conflict and how it might impact their service.

3. The rules on how to provide consumer notices have new detail

The CCPA requires that companies inform consumers about company practices as well as customers’ rights at specific points in the customer’s interaction. The new modifications specify that online CCPA-required notices should follow industry-recognized accessibility standards like the Web Content Accessibility Guidelines, version 2.1.

Sections for specific notices, like the notice at collection of personal information (§ 999.305) and the notice of right to opt-out of sale (§ 999.306), now include details about where notices should be displayed. For example, the modifications in § 999.305 (4) state that if PI collection happens in a mobile application for a purpose not reasonably expected by a consumer, a “just-in-time” notice with a summary of the collected PI should be provided. Modifications in § 999.306 say that opt-out notices within mobile applications may be provided through a link in the application’s settings menu. For a more thorough understanding of how notice requirements have changed, organizations should take a deeper look at these sections.

What’s next for privacy compliance?

From now until February 25, the CAG will be accepting comments on the current round of CCPA modifications via email or mail. From there, we’ll likely see the process for the final rulemaking record begin. Once the AG prepares the final rulemaking record and the Final Statement of Reasons, these will be submitted to the Office of Administrative Law (OAL) for approval. After 30 working days, the OAL will decide whether to approve the record. If approved, the final record will go to the California Secretary of State. All of this will likely take place sometime before July 1, leaving any stragglers with little time to make significant changes.

Although the CCPA is currently on everyone’s mind, the California law is merely a bellwether of an emerging change taking place within the compliance landscape. Beyond the CCPA, organizations should watch for The California Privacy Rights Act of 2020 (CalPRA), dubbed “CCPA 2.0.” The group Californians for Consumer Privacy is hoping to get the act on November’s ballot. Nebraska, New York, and a handful of other states also seem intent on joining California in implementing privacy legislation. Finally, developments in other countries — India, for example — illustrate how the demand for privacy legislation is growing abroad.

Privacy compliance does seem to be a trend that’s here to stay. Organizations that take the time to thoroughly ensure CCPA compliance today will likely have the systems in place needed to ensure compliance with future legislation.

Michael Osakwe is a tech writer and Content Marketing Manager at Nightfall AI.


Big Data – VentureBeat


How to deal with procedurally generated rules and patterns?

February 10, 2020   BI News and Info

I’m trying to procedurally generate replacement rules of the following form

X[{a,a}] -> X1
X[{a,b}]X[{b,a}] -> X2
X[{a,b}]X[{b,c}]X[{c,a}] -> X3
X[{a,b}]X[{b,c}]X[{c,d}]X[{d,a}] -> X4

Also, I know the number of maximum required replacement rules in advance.


Implementing {a1___, a2___, a3___, … } instead of {a,b,c, … }, my pseudocode reads

X[{a[1],a[2]}] X[{a[2],a[3]}]... X[{a[n-1],a[n]}] X[{a[n],a[1]}] -> Xn
Product[ X[{a[i],a[i+1]}], {i,1,n-1} ] X[{a[n],a[1]}] -> Xn

which translated into actual Mathematica code gives:

MyRule[n_] := 
  a___ Product[ 
    Subscript[X, {Symbol["μ"<>ToString[i]<>"___"], Symbol["μ"<>ToString[i+1]<>"___"]}], 
  {i,1,n-1}] Subscript[X, 
      {Symbol["μ"<>ToString[n]<>"___"], Symbol["μ"<>ToString[1]<>"__"]}
  ] :> a Subscript[X, n]

However,

Subscript[X, {a, b}] Subscript[X, {b, a}] /. MyRule[2]

shows that the rule definition is not working properly, allegedly because of a conflict in the way the dummy indices are written and some issues with their ‘Symbol’ character, but I don’t really get it. How could I fix this?


Recent Questions – Mathematica Stack Exchange


Web Designing Rules

December 7, 2019   Humor

A Few of the Most Important Rules You Should Never Ignore While Making a Website

Web designing is a very vast field and requires a broad view and special attention from us. We can be perplexed when making a website because we have so much variety and so many choices. Sometimes we get entangled in a complicated situation where we want to work in our own style, but the web designing principles don’t let us do so. The test of our skill is how we attain our goals within the rules required by the web designing process.

Although there are a large number of such rules that lead you towards a successful website, we shall mention a few of them.

The Best and Unique Web Designing Rules

Here are some of the most significant and essential rules to be followed in web designing. You are advised never to ignore them.

Keep it Planned

First of all, make up your mind about what you are going to do and how you are going to perform your assigned task. How will you adorn it to be an eye-catching and attractive website?

How have you planned to make it a helping hand for your business?

Keep reminding yourself of all the strategies and policies you’ve planned for a perfect website. Only you know the requirements and taste of your website best. Use catchy words and images that will attract visitors to your website. You can use premade images from other websites, but your own work is preferred and appreciated by everyone who visits your site.

Do detailed homework on relevant websites. This will help you create new ideas and also avoid similarity with other websites. If you fill up your website with unnecessary and unwanted stuff, that is just the way to drive visitors away from your site.

Content is King

This is no doubt the most important rule: the content you adorn your website with is the best source of traffic to your site. Here is a proverb that proves its authenticity.

“If your website is the body, the content is its soul.”

Always keep in mind that everyone who comes to your website is in search of some information. If the information you provide is up to date, it is a positive sign that will attract visitors. In this regard, other websites can guide you: what is the specialty or feature that draws you to them, and what compels you to click there?

Make it Meaningful

The third important rule, which needs your urgent and crucial attention, is how you sketch your website. Keep in mind that you are making it for the sense, taste, and feel of a visitor. Place yourself in the visitor’s shoes. Now you can decide: if you were a visitor, what would you want to see on this website?

Make the website pages comfortable and easy to take in for a visitor of average skill. Consider the needs and requirements you may have in the future. Make sure to provide clear navigation so that visitors do not feel bothered. Provide detailed contact information so that visitors can contact you whenever they need to. If you are offering services on your site, make sure to put the relevant material on the relevant page. Make it easy and convenient for your visitors to get your services and products.

Create an Effective Appearance and Great Color Scheme 

The color scheme and the appearance strike an emotional reaction in the mind of the visitor, so they should be used to make people feel a particular way. Different colors can arouse different feelings. The psychology of color planning is time-proven and used by professional web designers; it helps convey different messages to users. Don’t make overly complicated and confusing designs; the design should reflect the business message and the service or product that you are offering.

Usability 

Every type of user will visit your website, so it should be intuitive and easy to use. Navigation should be easily accessible on every page of the site. The user should not have to click more than four times to reach a specific page; as a general rule, two or fewer clicks should take the user to any page of the site. A convoluted navigation structure will quickly frustrate your visitors, and they will quit soon if they can’t find what they are seeking.

How Should a Site Structure Be?

This is the aspect that cannot be seen by your visitors, and yet it is crucial to having a quality website. The foundation of your site lies within its programming and coding structure. A reliable and robust site should support all screen sizes, be usable in all web browsers, and be compatible with all operating systems without requiring additional plugins such as Flash Player. You can lose a large number of potential customers if your website isn’t compatible with multiple systems.

Have A Good Host 

Your host or service provider is also extremely important in presenting a quality website. Your host should have excellent uptime (generally 99.9%+) and boast fast transfer speeds. Your site should not take long to load. Users have a vast choice, and they won’t wait around; they will move to some other site if your landing page doesn’t appear quickly.

Marketing

A good site should always follow good SEO practice and rank well within all the major search engines like Google. Don’t forget to combine these key aspects, and you will undoubtedly own an excellent website. If you ever need help ensuring that you have a great, quality website, Revived Media is always there to help you in this regard.

Collect Ideas by Visiting Other Big Websites

If you visit other relevant websites, you can find new ideas and make new policies and strategies that lead toward a perfect website. It will help you find many tips and collect great ideas for making an attractive and remarkable website. The big sites spend a lot of money testing the best locations for various crucial elements of the page. You can take advantage of this by picking out the best design elements from the top 3 or 4 sites relevant to your niche and working them into your website design. It will give you a head start, and you will admit it is well worth it.

Finally, if you follow all the rules mentioned above, you’ll create a website that not only generates traffic but also helps achieve the purpose you made your website for.


Mefunnysideup


TechNOVA 2019: The Connected Customer Rules the Game

July 22, 2019   TIBCO Spotfire

Recently, I had the pleasure of attending TechNOVA Connected Customer Conference (#Customer19) in London, where heavy hitters, such as Hilton, Deezer, Bulb, Nectar, and start-ups like Feedr and Drover talked about improving customer experience through hyper-personalization. Most of these companies were inspired by disruptive companies like Alibaba, Spotify, and Netflix who have revolutionized their respective industries all through personalizing their customer journeys. 

Every business aims to give a seamless and awesome experience to its customers. However, each customer is an individual and engages differently with the same product. As you can see in the example below from Amazon, there are people who love a particular Lego product and there are people who hate it. 

[Image: Amazon customer reviews showing both positive and negative ratings for the same Lego product]

So how do companies ensure they provide an exceptional experience with their products for every customer?

Glocalization—yes, it’s a thing

That’s the reason nearly every company at the conference talked about the importance of having an integrated customer experience across all the services and products they offer. “Glocal” (aka glocalization) was an important topic; it means the “global distribution of a product or service that is tailored to local markets.”

For example, Bulb, a renewable energy provider, talked about how they make the customer journey simple across 3 main touchpoints: 

  1. Simple interface with fast turnaround time on quotes
  2. Keep in constant contact with customers to prevent overdue bills
  3. Automate all customer touchpoints with basic AI to keep the customer happy

Geraldine de Boisse, Chief Product Officer at BULB, said: “Communication, using simple words, is the key. It is also not about keeping the price low every time. It is about the holistic value proposition.”

Another company, Feedr, who provide personalized food offerings, talked about how people are willing to share their data with you if you provide them value. Feedr has emotionally connected users to the app by giving personalized experiences, even though individuals are eating the same food. This again shows you that it’s not about the end outcome. The most important part is the experience and how a company takes you through the journey.

Using Data to Create Personalized Recommendations

What is common across these companies is the use of data to create personalized experiences. They all capture data at different levels such as directly from a consumer, capturing from social media, or examining clicks on an application. They then feed the data into an algorithmic engine to derive meaningful insights to create what we call “personalized recommendations.” 

How TIBCO can help you create personalized customer experiences

TIBCO colleagues Chris Lowe and Yana Chalyovska spoke at the event and talked about why Connected Intelligence is at the heart of every great customer experience. TIBCO not only enables organizations to get a 360-degree view of customer data in real time, but it also links companies directly to people’s emotional state. Drawing upon some personal stories in banking and travel, they demonstrated how TIBCO technology can create an emotional connection with the end customer.

In order to deliver an amazing experience that continuously delights your audience, you need to know where your customers’ satisfaction levels stand so you can identify the most appropriate engagement. TIBCO makes all of this possible through the Connected Intelligence platform.

Get in touch today to find out more.

Also, listen in to this webinar to understand how you can deliver engagement, experience, and empathy to maximize value for your customers.


The TIBCO Blog


Cat Bouncer Rules

May 21, 2019   Humor

When you go over to a friend’s house and learn the guest list needs approval by the house cat.

“Truth. All guests must be approved by the cat.”
Image courtesy of https://imgur.com/gallery/jScUw6m.



Quipster


Microsoft Dynamics 365 Webinar: When to use Business Rules vs JavaScript vs Workflows vs Custom Code to achieve your Business Goals

March 23, 2019   Microsoft Dynamics CRM


One of the great things about Microsoft Dynamics 365 is that it can be configured to meet YOUR company’s needs. How are you configuring your Dynamics system?

Do you struggle with choosing the proper type of component to use when configuring your Dynamics 365 system? Do you often ask “What is the best way to do this” or “What is the right way to do this”? Do you have a “go-to method” that you use because it’s “what you know”?

Join our webinar on Thursday, April 11th at 11:00 AM ET

CLICK HERE TO REGISTER!

In this webinar, we will discuss different business scenarios and explain when and why to use Business Rules vs JavaScript vs Workflows vs Custom Code for each. You will learn some of the nuances as well as Best Practices surrounding each of these components. You should walk away with a base knowledge and general understanding of each, giving you the ability to start applying the concepts to your own Dynamics organization.

Can’t attend live? You should still register! We will be sending out a recording to all registrants after the webinar.

Beringer Technology Group is a leading Microsoft Gold Certified Partner specializing in Microsoft Dynamics 365, CRM for Distribution, Office 365, Managed IT Services, Backup and Disaster Recovery, and Cloud and Unified Communication Solutions.


CRM Software Blog | Dynamics 365


Trump administration defends FCC’s repeal of net neutrality rules

October 14, 2018   Big Data

(Reuters) — The Trump administration defended the Federal Communications Commission’s repeal of landmark open internet rules known as net neutrality, urging a federal appeals court to reject a challenge.

In a 167-page court filing late on Thursday, the Justice Department and FCC urged the court to reject the suit filed by 22 states, the District of Columbia, Mozilla, Vimeo, public interest groups and local governments.

The Justice Department said the suit offers “no substantial reason to second-guess the commission’s decision to eliminate rules that the agency has determined are both unlawful and unwise.”

The FCC voted 3-2 in December along party lines to reverse rules adopted in 2015 that barred internet service providers from blocking or throttling traffic, or offering paid fast lanes, also known as paid prioritization.

Under the net neutrality rules, internet service providers have to treat all data fairly. Supporters of the rules fear providers could otherwise censor some data or raise costs for connectivity for some users.

The FCC also sought to pre-empt states from setting their own rules governing internet access.

A group of Democratic members of Congress, including Representative Nancy Pelosi and Senator Chuck Schumer, major cities, including New York, Boston and Chicago, back the states’ challenge, while trade groups representing major cable and mobile phone companies support the FCC action.

The Justice Department earlier this month filed suit to block California’s state net neutrality law from taking effect in January.

The net neutrality repeal was a win for providers like Comcast, AT&T and Verizon Communications, but was opposed by internet companies like Facebook, Amazon.com, and Alphabet, which say the repeal could lead to higher costs.

The U.S. Senate voted in May to reinstate the net neutrality rules, but the measure is unlikely to be approved by the House of Representatives.

The FCC in December handed internet providers sweeping powers to recast how Americans use the internet, as long as they disclose changes. The new rules took effect in June but providers have made no changes.

The Justice Department said the transparency rules “discourage broadband providers from engaging in harmful practices by reducing their incentives and ability to do so” and suggested it would allow “the market to prompt broadband providers to take corrective measures.”

Supporters of net neutrality argue that in some places, consumers do not have a choice among high-speed broadband providers.

The Justice Department brief also argues that banning paid prioritization may be “economically inefficient,” while suggesting that blocking or throttling will not happen because providers do not have an “economic incentive” to do so.

The U.S. Court of Appeals for the District of Columbia will hold oral arguments on the case on Feb. 1.


Big Data – VentureBeat


Data Management Rules for Analytics

June 29, 2018   Sisense

With analytics taking a central role in most companies’ daily operations, managing the massive data streams organizations create is more important than ever. Effective business intelligence is the product of data that is scrubbed, properly stored, and easy to find. When your organization uses raw data without proper management procedures, your results suffer.

The first step towards creating better data for analytics starts with managing data the right way. Establishing clear protocols and following them can help streamline the analytics process, offer better insights, and simplify the process of handling data. You can start by implementing these five rules to manage your data more efficiently.

1. Establish Clear Analytics Goals Before Getting Started

As the amount of data produced by organizations daily grows exponentially, sorting through terabytes of information can become problematic and reduce the efficiency of analytics. Such large data sets require significantly longer times to scrub and properly organize. For companies that deal with multiple streams that exhibit heavy bandwidth, having a clear line of sight towards business and analytics goals can help reduce inflows and prioritize relevant data.

It’s important to establish clear objectives for data and create parameters that filter out data points that are irrelevant or unclear. This facilitates pre-screening datasets and makes scrubbing and sorting easier by reducing white noise. Additionally, you can focus even more on measuring specific KPIs to further filter out the right data from the stream.


2. Simplify and Centralize Your Data Streams

Another problem analytics suites face is reconciling disparate data from multiple streams. Organizations have internal, third-party, customer, and other data that must be considered as part of a larger whole instead of viewed in isolation. Leaving data as-is can be damaging to insights, as different sources may use unique formats or different styles.

Before allowing multiple streams to connect to your data analytics software, your first step should be establishing a process to collect data more centrally and unify it. This centralization not only makes it easier to input data seamlessly into analytics tools, but also simplifies how users find and manipulate data. Consider how best to set up your data streams to reduce the number of sources and eventually produce more unified sets.

3. Scrub Your Data Before Warehousing

The endless stream of data raises questions about quality and quantity. While having more information is preferable, data loses its usefulness when it’s surrounded by noise and irrelevant points. Unscrubbed data sets make it harder to uncover insights, properly manage databases, and access information later.

Before worrying about data warehousing and access, consider the processes in place to scrub data to produce clean sets. Create phases that ensure data relevance is considered while effectively filtering out data that is not pertinent. Additionally, make sure the process is as automated as possible to reduce wasted resources. Implementing functions such as data classification and pre-sorting can help expedite the cleaning process.
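As a small illustration of an automated scrub step, assuming hypothetical Raw_Sales and Clean_Sales tables, a query can normalize formats and filter obvious junk before the data is warehoused:

INSERT INTO Clean_Sales (state_code, city_name, sales_amt)
SELECT UPPER(TRIM(state_code)),   -- normalize formats
       TRIM(city_name),
       sales_amt
  FROM Raw_Sales
 WHERE sales_amt IS NOT NULL      -- filter out unusable rows
   AND TRIM(state_code) <> '';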

4. Establish Clear Data Governance Protocols

One of the biggest emerging issues facing data management is data governance. Because of the sensitive nature of many sources—consumer information, sensitive financial details, and so on—concerns about who has access to information are becoming a central topic in data management. Moreover, allowing free access to datasets and storage can lead to manipulation, mistakes, and deletions that could prove damaging.

It’s vital to establish clear and explicit rules about who can access data, when, and how. Creating tiered permission systems (read, read/write, admin) can help limit the exposure to mistakes and danger. Additionally, sorting data in ways that facilitate access to different groups can help manage data access better without the need to give free rein to all team members.
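In SQL terms, a tiered permission system like the one described can be sketched with roles and grants; the role and table names here are illustrative:

CREATE ROLE analytics_reader;                         -- read tier
CREATE ROLE analytics_writer;                         -- read/write tier
GRANT SELECT ON Clean_Sales TO analytics_reader;
GRANT SELECT, INSERT, UPDATE ON Clean_Sales TO analytics_writer;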

5. Create Dynamic Data Structures

Many times, storing data is reduced to a single database that limits how you can manipulate it. Static data structures are effective for holding data, but they are restrictive when it comes to analyzing and processing it. Instead, data managers should place a greater emphasis towards creating structures that encourage deeper analysis.

Dynamic data structures present a way to store real-time data that allows users to connect points better. Using three-dimensional databases, finding methods to reshape data rapidly, and creating more inter-connected data silos can help contribute to more agile business intelligence. Generate databases and structures that simplify accessing and interacting with data rather than isolating it.

The fields of data management and analytics are constantly evolving. For analytics teams, it’s vital to create infrastructures that are future-proofed and offer the best possible insights for users. By establishing best practices and following them as closely as possible, organizations can significantly enhance the quality of the insights their data produces.



Blog – Sisense


“Gotchas” with Business Rules in Dynamics 365

June 15, 2018   Microsoft Dynamics CRM

Business Rule functionality was a welcome addition to the arsenal of customization tools in Microsoft Dynamics 365. It allows greater flexibility on entity forms without the need to rely completely on JavaScript development. However, there are some lesser-known pitfalls that may leave you scratching your head, wondering why the Business Rule you created had no effect.

In this blog, we share a list of “gotchas” for Business Rules in case you are stuck.

JavaScript interference

Before designing a Business Rule for a particular entity, make sure to check if any existing JavaScript on the form will interfere with your Business Rules. JavaScript may accidentally trigger your Business Rules prematurely or cause other unexpected behavior. If JavaScript already exists on the form, it may make sense to continue leveraging JavaScript instead of Business Rules to manage behavior on the entity form. This can avoid any conflicts from the get-go, but it may also be easier to support going forward since all the behavior is managed in one place. Just remember to take into account the expectations of form behavior for users on mobile devices.

OnChange behavior is not triggering

Do not use Business Rules to trigger an OnChange event. This is by Microsoft’s design, so the system does not accidentally get lost in an infinite loop.

Mismatch of field properties

If you have a Business Rule that is supposed to take a field’s value to update another field, it may not work. There may be no obvious indication of why it didn’t work (e.g. error message).

In this scenario, check the properties of the fields in question. If there is a mismatch in field data type, or if one field does not match the field length of the other, then the Business Rule will not work. Simply update one of the fields to match the other field’s data type and length and test the Business Rule again.

Check if all fields are on the entity form

If you are encountering issues with your Business Rules, it makes sense to check if all the fields involved in the conditions are present and published on the entity form. This may happen in a scenario where you are hiding a supporting field that shouldn’t necessarily be displayed to the user.

Check the scope of the Business Rule

Make sure to verify the scope of the Business Rule before activating it. This setting can be found in the top right corner of the Business Rule creation window. There may be scenarios where you do not want the Business Rule to trigger for multiple entity forms.

Conclusion

Hopefully the above tips may have resolved an issue with your Business Rules. Be sure to share your tips in the comments below if you have any other experiences with Business Rules!

For more helpful Dynamics 365 tips, be sure to subscribe to our blog!

Happy Dynamics 365’ing!


PowerObjects- Bringing Focus to Dynamics CRM
