Tag Archives: References
Using Connection References with Power Automate and Common Data Service
If you have not heard of Connection References yet, they are definitely worth looking into. There are a lot of great blogs out there already, such as our blog introducing the feature, as well as our Docs information.
At a high level, connection references are solution-aware components that associate the connector used with the flow it resides in. When importing a solution that contains a flow from one environment to another, this prevents you from needing to open the flow and re-establish connections. In this case, we will focus on the Common Data Service (current environment) connection reference.
There are a few important things I want to call out.
When a connection reference is automatically created, or automatically used within a flow connection, it is specific to a user and that user’s connection. Connections within a flow can be manually updated to use a connection reference built on another user’s connection, if one exists.
Connection references can also be created manually from the Power Apps maker portal. These, too, are specific to a user’s connection.
From within a solution, click New and select Connection Reference (preview).
You will give it a name, which I recommend making unique, since that is what currently displays in the views. You will also have to select a connector and an existing connection for that connector type; if one doesn’t exist, you must create it. (Note: a connection for that connector must also exist for this user in the destination org.)
That is pretty straightforward.
How is a connection reference automatically created? In this case, a user does not have an existing connection reference in an environment where they want to build a flow. If this user creates a flow in this environment using the Common Data Service (Current environment) trigger or action, a connection reference will automatically be created.
When it is created automatically, it will look like below, using a default name:
The only way you can currently find the schema name of this connection reference is to click the ellipsis in this view and click Edit. Then you will see it, as below:
Now, suppose more than one person is creating flows that use Common Data Service (current environment) connectors in that same environment, and those users also do not have an existing connection reference. The same thing will happen for each of them: if a user does not already own a connection reference for Common Data Service (current environment) and creates a new flow using a trigger or action for that connector, another connection reference will be created with the exact same display name for that user.
However, the schema name is different.
If you extend this scenario to 10-15 people, or more, building flows in an environment, you will end up with multiple connection references that appear to be the same. This can become confusing very quickly, especially during import of the solution.
If a user already has an existing connection reference for Common Data Service (current environment) in an environment and creates a new flow, that connection reference will automatically be set on the connector in the flow, and a new connection reference will not be created.
As a best practice, and to prevent having multiple connection references using the same name and causing confusion, each user can create a connection reference manually with a preferred naming convention before creating any flows.
If you are already in this situation, users can change the display name of existing connection references or create a new connection reference with the preferred name and reassign this to the existing flows.
To update the connection reference once you have a new one created, open the flow in the originating org and update the connections to use the new connection reference. This will update the destination org when imported.
There is a known issue right now: if you create a new solution with a flow, or add a flow to an existing solution, the connection reference is not added to the solution. It must be added manually. Be aware of this, especially if that connection reference doesn’t exist in the destination environment, because the solution will fail to import. If you encounter this and have multiple connection references using the default name of Common Data Service (current environment), you will have to find the schema name that aligns with the missing connection reference and add it to the solution in the originating environment.
While connection references are in preview, one connection reference can only be used within a maximum of 16 flows. If the same connection needs to be used in more than 16 flows, then create another connection reference with a connection to the same connector.
What happens when you hit this limit? You will see a message like this when trying to save the flow:
What happens when User A creates a solution with a flow in one environment, using User A’s connection and connection reference, but User B imports this solution into a destination environment where User A does not have an existing connection?
When importing the solution, User B will have the option to select their own existing connection to tie to the connection reference, or to create a new connection. They could select one below,
or click + New Connection
Otherwise, User A would need to sign in to the flow portal for the destination environment and create a connection. This would allow the user importing the solution to select the correct connection for the connection reference.
Thanks for reading!
Aaron Richards
Sr. Customer Engineer
Dynamics 365 and Power Platform
Do You Have REFERENCES?
The late Jim Gray once said that in the early days of SQL, “We had no idea what we were doing!” However, that is not completely true. What we were doing was mimicking the technologies that had gone before. The first SQL engines put each table in a separate physical file. We had file systems that had been in use for decades. We had lots of code for handling those files, in particular, all kinds of variations on index sequential access methods (ISAM). But data modeling introduced something we hadn’t had before: the concept of data integrity being enforced declaratively instead of procedurally.
In the dark ages of file systems, if we wanted to restrict a field in a record to particular values, then we had to have a program to enforce this rule. Actually, it was worse than that because we had to have every program enforce this rule if it made a modification in the file. The idea of having a general CHECK() constraint on a column simply did not exist. COBOL gave us some display formatting on fields with the PICTURE clause, but this had nothing to do with the relationships in the data.
Here’s a relatively straightforward example from the old days. You have an inventory file that shows all the goods that you sell and an orders file that shows who placed what orders. The integrity rule is pretty simple: you can’t sell anything that you don’t have in the inventory. You would go to the Orders file record and loop through the items that were ordered, which would be in a repeating group (an OCCURS clause in COBOL), and match them to the inventory. If you had the item in inventory, you would execute one procedure (in COBOL, this would be a PERFORM paragraph statement). If you didn’t have the item, you would execute a second procedure.
REFERENCES Clause
The <references specification> is the simplest version of a referential constraint definition:
<references specification> ::= [CONSTRAINT <constraint name>] REFERENCES <referenced table name> [(<reference column list>)]
What this says is that the value in this column of the referencing table must appear somewhere in the referenced table’s columns which are named in the constraint. Notice the terms “referencing” and “referenced” are not the same as the “parent” and “child” terms used in network databases. Those terms were based on pointer chains that were traversed in one direction; that is, you cannot find a path back to the parent node from a child node in the network. Another difference is that the referencing and referenced tables can be the same table. There is also no such thing as a “link table” in RDBMS; that’s another network database term.
Furthermore, the referenced column must have a UNIQUE constraint. A PRIMARY KEY is a special case of a UNIQUE constraint that also implies NOT NULL on all its columns. If the referenced columns are in a UNIQUE constraint, then the target table can have one and only one NULL in that column; the NULLs in the referencing table will match it. If no <reference column list> is given, then the PRIMARY KEY of the referenced table is assumed to be the target.

There is no rule to prevent several columns from referencing the same target columns. For example, you might have a table of flight crews that has pilot and copilot columns that both reference a table of certified pilots. A table can also reference itself (this can get tricky and involves turning constraints on and off). A circular reference is a relationship in which one table references a second table, which in turn references the first table. The old gag about “you cannot get a job until you have experience, and you cannot get experience until you have a job!” is the classic version of this.
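Returning to the flight-crew example, a minimal sketch (the table and column names are illustrative):

CREATE TABLE Pilots
(pilot_name VARCHAR(35) NOT NULL PRIMARY KEY);

CREATE TABLE Flight_Crews
(flight_nbr INTEGER NOT NULL PRIMARY KEY,
 pilot VARCHAR(35) NOT NULL
    REFERENCES Pilots (pilot_name),
 copilot VARCHAR(35) NOT NULL
    REFERENCES Pilots (pilot_name));  -- two columns, one referenced target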
As a general design principle, it’s much more convenient to have a tree structured span of references. In particular, it makes referential actions much more predictable. Now I need to define “referential actions” and show how they work.
Referential Actions
The very first SQL engines behaved pretty much like the files of procedural languages. When TRIGGERs were added to the language, you could still do integrity checks in procedural code, but now the check was in one place, the DDL, and did not have to be repeated in every module of code. But people began to notice the same coding patterns were being used over and over in about 80% of these TRIGGERs. So, we added declarative subclauses for the most common situations. This means that the SQL engine can optimize these cases, which is not possible with triggers.
We decided that the REFERENCES clause can have two sub-clauses that take actions when a “database event” changes the referenced table. The two database events are updates and deletes, and the sub-clauses look like this:
<referential triggered action> ::= <update rule> [<delete rule>] | <delete rule> [<update rule>]

<update rule> ::= ON UPDATE <referential action>

<delete rule> ::= ON DELETE <referential action>

<referential action> ::= CASCADE | SET NULL | SET DEFAULT | NO ACTION
When the referenced table is changed, one of the referential actions is set in motion by the SQL engine.
1) The CASCADE option will change the values in the referencing table to match the value (if any) in the referenced table. This is a very common programming technique that allows you to set up a single table as the trusted source for an identifier. This way, the system can propagate changes automatically.
The ON DELETE CASCADE option is probably the most common. The reason is that in data modeling we talk about having “strong” and “weak” entities. A weak entity (such as the Order Details) can exist only if it has a reference back to a strong entity (Orders). You can build chains of weaker and weaker entity references to any depth and spread them out in a tree structure that begins at the strongest entity. Let’s use ← to mean “references” and look at the possible ways you can chain a strong entity, E1, and its two weaker entities, E2 and E3: in the first case, a chain, E1 ← E2 ← E3; in the second case, a fan, E1 ← E2 and E1 ← E3.
The difference can be subtle. Imagine that E1 is an order. In the first case, E2 might be order items like a back-to-school supply kit. This kit is made up of individual items (pencils, pens, crayons, paper, etc.) from E3. In this model, you can delete from or add individual items to a kit. Whatever you do, it’s still a back-to-school kit until you remove all the items.
In the second case, E2 might be an order item, and E3 could be delivery options. In theory, you could have an order, E1, that is empty and still deliver it. That doesn’t make much sense in the real world, but it is allowed by the data model.
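A minimal sketch of the strong/weak pattern with cascading actions, reusing the hypothetical order tables from above:

CREATE TABLE Orders
(order_nbr INTEGER NOT NULL PRIMARY KEY);

CREATE TABLE Order_Details
(order_nbr INTEGER NOT NULL
    REFERENCES Orders (order_nbr)
    ON UPDATE CASCADE   -- renumbering an order propagates to its details
    ON DELETE CASCADE,  -- deleting an order removes its details
 item_code CHAR(10) NOT NULL,
 PRIMARY KEY (order_nbr, item_code));

Deleting a row from Orders now silently removes its weak Order_Details rows, so the details can never be orphaned.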
2) The SET NULL option will change the values in the referencing table to NULL. Obviously, the referencing column needs to be NULL-able, but the referenced column does not.
3) The SET DEFAULT option will change the values in the referencing table to the default value of that column. Obviously, the referencing column needs to have some DEFAULT declared for it, but each referencing column can have its own default in its own table.
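A quick sketch contrasting the two options (the tables and the 'UNASG' default are illustrative; note that the default value must itself exist in the referenced table for the result to be valid):

CREATE TABLE Departments
(dept_code CHAR(5) NOT NULL PRIMARY KEY);

CREATE TABLE Personnel
(emp_id INTEGER NOT NULL PRIMARY KEY,
 dept_code CHAR(5) DEFAULT 'UNASG'    -- NULL-able, with its own default
    REFERENCES Departments (dept_code)
    ON DELETE SET NULL        -- drop a department: its people get NULL
    ON UPDATE SET DEFAULT);   -- change a code: its people fall back to 'UNASG'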
A little-known feature of SQL is the DEFAULT VALUES clause in the INSERT INTO statement, where a single row is inserted containing only DEFAULT values for every column. The syntax is: INSERT INTO <table name> DEFAULT VALUES; as a shorthand for INSERT INTO <table name> VALUES (DEFAULT, DEFAULT, …, DEFAULT).
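A concrete sketch of the shorthand (a hypothetical table in which every column has a default):

CREATE TABLE Counters
(counter_name CHAR(10) DEFAULT 'misc' NOT NULL,
 counter_val INTEGER DEFAULT 0 NOT NULL);

INSERT INTO Counters DEFAULT VALUES;  -- inserts the row ('misc', 0)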
4) The NO ACTION option explains itself. Nothing is changed in the referencing table; if the change to the referenced table would leave orphaned references behind, it is rejected and an error about the reference violation is raised. If a REFERENCES constraint does not specify any ON UPDATE or ON DELETE subclause, then NO ACTION is implicit.
Full ANSI/ISO Standard SQL has more options about how matching is done between the referenced and referencing tables. Full ANSI/ISO Standard SQL also has deferrable constraints. This lets the programmer turn a constraint off during a session so that the table can be put into a state that would otherwise be illegal. However, at the end of a session, all the constraints are enforced. Many SQL products have implemented these options, and they can be quite handy, but I will not mention them any further. In SQL Server, you have to explicitly turn the constraints on and off with an ALTER TABLE statement. Please remember to give your constraints names, so this feature will be easy to use.
-- Disable/enable all table constraints
ALTER TABLE <table name> [NOCHECK | CHECK] CONSTRAINT ALL;

-- Disable/enable a single constraint
ALTER TABLE <table name> [NOCHECK | CHECK] CONSTRAINT <constraint name>;
It is also possible to use system procedures that will enable or disable all the constraints in the entire database. I can’t give a good reason for wanting to do this, and it sounds likely to be very dangerous.
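For example, a bulk load with a named constraint might look like this (the table and constraint names are hypothetical); note that in SQL Server a plain CHECK CONSTRAINT re-enables the constraint without re-validating existing rows, while WITH CHECK CHECK CONSTRAINT also re-validates them:

ALTER TABLE Order_Details NOCHECK CONSTRAINT FK_Details_Orders;

-- ... bulk-load or repair data here ...

ALTER TABLE Order_Details WITH CHECK CHECK CONSTRAINT FK_Details_Orders;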
This weak and strong entity model is very simple. It may not look that way at first, but full E-R modeling can get more elaborate, and it’s tough to support in SQL.
POINTER Chains
I’m probably one of the few people who still remember WATCOM. It was a spinoff from the University of Waterloo in Canada. The University produces some of the best systems programmers I’ve ever worked with, but they could not build a useful human interface. They also produced an SQL compiler which was eventually sold to Sybase.
Their SQL product knew the difference between a referenced and referencing table. The referenced columns in the key were materialized (one way, one place, one time) then the referencing tables built pointer chains back to that occurrence. Basically, they took a lesson from the old network databases (IMS, IDMS, Total, etc.). This meant that no matter how big the key was, the references to it used a simple pointer. It also meant doing joins on primary and foreign keys is fast and cheap (we got really good at scanning pointer chains back in the old network days!). DRI (declarative referential integrity) actions to cascade the updates were also insanely fast; the system simply changed the reference and left the pointers alone.
Similar tricks can be done with SQL products that use hashing and with columnar databases. This is one of the reasons that a REFERENCES clause is actually more abstract and is, therefore, nothing like a link.
E-R Modeling
In 1976, Peter Chen introduced Entity-Relationship (E-R) modeling. Variations on his diagramming technique quickly appeared, differing mostly in the graphics. This is still an excellent tool for data modeling today, but it takes a little care to generate DDL from the diagrams.
The basic symbols are fairly simple. Entities are shown by rectangles, relationships among the entities are indicated by a diamond, and connecting lines between the diamonds and rectangles show the relationships. Some simple rules are that a relationship has to apply to one or more entities, that two or more entities in a relationship have to be connected, and so forth. There are additional symbols to show what kind of relationship the entities have with each other.
Explaining this is probably easier to do with a simple example. Consider the relationship between authors and their books. The relationship is authorship, or you can just use the verb “write” to keep things simple.
A vertical line means one member of the entity set must be involved in the relationship. Think of the digit one. A circle on the connecting line means no members can be involved; think of a zero.
A “crow’s foot” is the symbol for “many,” which means zero or more. For example, the diagram says at least one, and possibly more, authors are involved in the authorship relation. On the books side, there are some options. A circle with a crow’s foot would mean zero or more books are written by the author or authors.
On the other hand, two vertical slashes mean that the author or authors have written precisely one book, no more, no less.
There have been a few experimental database products that implemented these notations, but SQL is so dominant they never really got anywhere.
WITH CHECK OPTION
A little-used feature in SQL can be used to fake constraints at this level. It is the WITH CHECK OPTION clause on a VIEW, which has existed since the SQL-89 standard. To explain this, consider the VIEW:
CREATE VIEW V1
AS SELECT col1
   FROM Foobar
   WHERE col1 = 'A';
The view is updatable; that is, it is based on one and only one table and can reach each underlying row unambiguously. An update like this can be performed:
UPDATE V1
SET col1 = 'B';
The update works just fine, but rows which were previously returned by the VIEW now disappear because they no longer meet the WHERE clause condition. An INSERT statement into the view could also put values into the base table whose rows don’t show up in the VIEW.
The WITH CHECK OPTION makes the system look at the WHERE clause in the VIEW definition. If an insertion or update fails the test, the SQL engine rejects the changes, and the VIEW remains the same. In the full ANSI/ISO Standard, this feature is more elaborate and includes cascade options.
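Here is a quick sketch of the same idea (V2 is an illustrative variant of V1 above); the UPDATE that previously made rows vanish is now rejected:

CREATE VIEW V2
AS SELECT col1
   FROM Foobar
   WHERE col1 = 'A'
WITH CHECK OPTION;

-- Rejected: the updated row would no longer satisfy col1 = 'A'
UPDATE V2
SET col1 = 'B';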
To fake a constraint, you can use relatively simple [NOT] EXISTS() predicates in a VIEW to create the conditions. For example, to assure that orders have at least one order item, you can create a VIEW on the join of Orders and Order_Details. This would mean that an order must match one or more order details to show up in the Orders_2 view. Please note that the base tables are still in the schema; you have to make a decision to use only the VIEW and use DCL to prevent user access to those base tables.
CREATE VIEW Orders_2
AS SELECT O.order_nbr, ..
   FROM Orders AS O
   WHERE EXISTS (SELECT *
                 FROM Order_Details AS D
                 WHERE D.order_nbr = O.order_nbr)
WITH CHECK OPTION;
Now simply use Orders_2 in your queries. You can still use the base table, Orders. You might want to sit down and play with all the options that you can implement in such a VIEW.
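If you do decide to lock the base table away behind the VIEW, a minimal DCL sketch might look like this (the grantee is illustrative):

REVOKE SELECT ON Orders FROM PUBLIC;   -- hide the base table
GRANT SELECT ON Orders_2 TO PUBLIC;    -- expose only the view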
Conclusion
Yes, putting in E-R style constraints is a good bit of work for the programmer, but you need to ask yourself whether it is worth the effort. When do you really need data integrity? Is there any hope in the future for some help from SQL? The answer is yes, and we will find it when we get an implementation of the CREATE ASSERTION statement. This is essentially a CHECK() constraint which applies to the schema as a whole, rather than to columns within a single table. This is why constraint names are global rather than local.
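For a taste of where that leads, here is a sketch in ANSI/ISO Standard syntax (few products implement it yet), reusing the hypothetical Orders and Order_Details tables from earlier:

CREATE ASSERTION Orders_Have_Details
CHECK (NOT EXISTS
        (SELECT *
         FROM Orders AS O
         WHERE NOT EXISTS
               (SELECT *
                FROM Order_Details AS D
                WHERE D.order_nbr = O.order_nbr)));

This enforces, schema-wide, the rule that the Orders_2 view could only simulate: every order must have at least one order detail.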
Why PUMA References Reflect Poorly on HRC: A History Lesson
I’ve noticed a lot of HRC supporters refer to Sanders supporters as PUMAs.
Well, first, the PUMAs were nominal Clinton supporters in 2008. I say ‘nominal’ because the undisputed queen of the PUMAs, Darragh Murphy, had only donated to McCain prior to that.
Another notable PUMA adds to the irony: Lynn Forrester, Lady de Rothschild, seen here calling a man who grew up on food stamps an elitist, herself the daughter of an aircraft tycoon who married a baron. (The video was uploaded by another notable PUMA, Larry Johnson, best known for the ‘whitey tape’ hoax, but it actually is AC360.)
What am I getting at? PUMA created the smears about Obama, PUMA nominally supported Clinton, but PUMA really supported McCain and thought Clinton was unelectable.
When you remind people of PUMA, what you’re telling me is “Hillary can’t win—vote Hillary!”
(As an aside, the 2008 campaign was when I fell out of love with Ms Clinton, due to the racially-charged nature of the campaign, and not just the PUMAs.)