Tag Archives: Control

How to Display Image Saved Using Pen Control on Web Client In Dynamics 365.

Microsoft Dynamics 365 comes with a feature called “Pen Control”, which allows users to draw and add a signature directly in the Dynamics 365 app. But there are some limitations. A user can draw and input a signature only on the mobile client, not the web client. Also, on the mobile client a saved signature is shown as an image, but on other clients it is shown as text.

However, there is a workaround to display the saved signature as an image instead of the text on web client. In this blog, I will walk through the steps to achieve this.

I created a field called “Signature” with “Pen Control” added to it, and added “Signature” to the Contact main form. Then I created a new record and saved it with an input in the “Signature” field. The following images show how the “Signature” field of the newly created Contact record is represented differently on the mobile and web clients.

[Screenshots: the “Signature” field shown as an image on the mobile client and as text on the web client]

On the web client, the text shown in the “Signature” field is the source address of the image that was created through the mobile client. That means we can use this address in an iFrame to display the signature image on the web client. Here are the steps to accomplish this.

1. Add JavaScript code as a web resource. I added mag_/js/contactsignature.js, and the following is the code that I used.

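The original post showed the script only as a screenshot. Below is a minimal sketch of what such a web resource might look like, using the Xrm.Page client API of that era; the field schema name mag_signature is an assumption, since the actual name is not visible in the post.

// contactsignature.js - show the Pen Control signature image in an iFrame on the web client
// Assumption: the "Signature" field's schema name is "mag_signature".
function displaySignature() {
    var signatureAttribute = Xrm.Page.getAttribute("mag_signature");
    var signatureIframe = Xrm.Page.ui.controls.get("IFRAME_Signs");
    if (signatureAttribute === null || signatureIframe === null) {
        return; // field or iFrame not present on this form
    }
    // On the web client, the field's text value is the source address of the signature image
    var imageUrl = signatureAttribute.getValue();
    if (imageUrl) {
        signatureIframe.setSrc(imageUrl);
    } else {
        signatureIframe.setSrc("about:blank"); // keep the blank placeholder when the field is empty
    }
}

The function name defined here must match the function name registered in the handler properties in step 6 (in this sketch, displaySignature).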

2. Create a new iFrame and add it to the Contact main form. This is the same form used to input the signature through the mobile client. I named the iFrame “IFRAME_Signs”. Populate the URL field with “about:blank”; this will display an all-white image similar to the mobile client, which implies the field is empty.


3. Then go to “Form Properties”.


4. Select the “Events” tab, and then click on “Add”. Then add the “mag_/js/contactsignature.js” web resource.


5. Then expand the “Event Handlers” tab, select “OnLoad” in the “Event” field, and then click on “Add”.


6. Set the handler properties as shown in the screenshot below, and then click “OK”. Be sure to save the form properties by clicking “OK” again. Then save and publish all changes.

[Screenshot: the handler properties dialog]

Let’s see our iFrame and logic in action. I opened the same Contact record that was previously displaying text in the “Signature” field.

[Screenshot: the Contact record on the web client, with the signature image displayed in the “Contact Signature” iFrame]

As we can see in the screenshot above, the “Signature” field contains the address of the image that is now displayed in the “Contact Signature” iFrame.

To make this change more user-friendly, you can hide the “Contact Signature” iFrame on mobile clients and the “Signature” field on the web client. But that’s something I will leave up to you.


Magnetism Solutions Dynamics CRM Blog

Version Control: Don’t let your versioning get out of control


Losing control of changes in a file is the stuff of nightmares. You know the deal: You craftily create copy, send it around for review amongst your peers, and then everyone starts sending back separate files with their comments. If you accept their feedback, you start implementing the changes in one of the documents – your ‘master’ – as you continue to bring in additional comments and edits from other people. But if the feedback loop is long and you’re at all distracted, it can be terrifyingly easy to lose track of which document is the master.

I’m sure you’ve lived the nightmare of overwriting important changes or renaming a file incorrectly, only to realize the mistake many revision rounds later. I have, and it’s crazy-making. Please see my other feedback/edits, says a colleague. I swore I made those changes, I think.

Worse yet: The thing is printed or published before you catch it; you only realize later that it was a non-final version. Horror.

Version control – or lack thereof – can take down any well-meaning marketing organization. Losing track of what round you’re on, which is the working version, and where changes went, is maddening, frustrating, and inefficient.

So how to wrest control of this beast? Let’s talk about how to keep track of your versions … where to store them, how to name files, how to follow changes, etc.

Establish file-naming conventions

One simple but tremendously effective first step is to establish file-naming conventions across your organization.

If you take away nothing else from this article, heed this: Establish a simple but consistent naming system for your files and stick to it. This may include a combination of numerals or initials or dates. And you must use it militantly, with each and every revision.

Personally, I use different methods depending on how many people are involved in a project. If it’s just me working on a file, like the draft of this blog, I keep a simple file-naming strategy. I label everything with version number only: “v1,” “v2,” etc.

But once more people get involved, I get more granular. When multiple people are reviewing, I find it handy to append a file name with initials so I can easily see whose review I’m working with or scan my desktop to see who’s already given input. I also like to add a date to the file name, such as FILENAME_060717. (True, most word processing programs provide a revision history in the file details – but that doesn’t stick if you later re-open a file and make a tiny change. The clock resets.) And when working with extensive, fast-paced edits – inputting multiple changes in a day – I like to timestamp the actual file name with “morning revision,” “afternoon revision,” or even “2pm revision,” etc., to keep it all straight.

Set up a clear creative workflow

Another step to gaining control of the beast is to set a strategy of who reviews what, and at what stage. Having some kind of documented plan, whether simple or intricate, can help you keep control of files. These are the hallmarks of a creative workflow, which is another process I advocate. In fact, you may be able to marry the two into one process, one efficient fell swoop. But again, like the naming convention, you really need to stick to the plan and enforce it.

Get everyone on the same page

So you have a file naming system and a plan. That’s great – for you. But if you need to loop others into that plan, it’s critical to get everyone working on the same proverbial page. This means implementing rules and process, not just for yourself, but for the whole organization.

Establish an efficient cross-functional workflow and get someone to wear the Type A hat. This person can document the process, send instructions to colleagues about how to name and label their files, hold mini training sessions, and spot-check to ensure the process is adhered to. Doing the stickler-for-details routine is not a lot of fun, but it will save time in the long run.

A “Word” on track changes

There’s nothing that sparks ire in me like receiving a clean file from a contributor. I think, “Hey, wow – they had no changes” … only to start reading through and realize, with horror, that they have made legions of edits without notating them. It’s when I’m in the weeds, reading line by line to compare the old and the new to catch changes, that smoke comes out of my ears.

Change-tracking tools, like the one in Word or Google Docs, provide an easy way to annotate, edit, and mark up. This helps me quickly see who made what change and when.

I don’t just track changes when working with others. I turn the function on while doing my own work to keep tabs on what I’m doing. I love that the program keeps a record of my changes and easily enables me to stet and revert if my edits aren’t so stellar after all.

A cousin of track changes is the comments tool, which I equally love. I use comments to remind myself of status, things I need to come back and complete, or thought process. I also use comments to explain to others why I’m making certain wording changes, why I rewrote a passage for tone, and so on.

I know what you’ve heard in the past about track changes – and it’s true. It’s a visually overwhelming beast to wrangle. But the programs have evolved in a way that makes this manageable. Now it’s possible to mark changes but hide them as you go, so your page stays clean and easy to read and you can focus on just the flow and look of the copy.

Caution: Don’t stamp it “final” until it really is

There’s something magical – and not in a good way – that seems to happen when I append a file name with the word “final.” It’s like a cue to Murphy’s Law: The moment I proclaim a file to be finished, inevitably a host of edits and further changes trickle in.

To trick myself (and Murphy), I’ve learned to not use the word “final” on my files until the thing has shipped, launched, and left the proverbial building. Instead, I keep on keeping on with the naming conventions above so I know which version is the latest. Only later, once all is done, do I come back and make a copy of the One That Went To Press and rename it “Final.”

How to get to “final” stage? Provide a window for internal client review. And then, when it’s closed, it’s closed. Unless it’s a legal issue, ship when you say and stick to it. Your clients will fall in line.

Create an archive

Archiving – on your desktop or a cloud drive – is another great way to keep track of versions.

As I work on multiple files, I stash them in a folder on my desktop called “Previous Versions.” This way, when I open my main working folder, I only see the absolute latest. It helps keep me from being overwhelmed or from scanning through and grabbing the wrong file. It also makes for a nice archive. When I work this way, I create a folder that captures all of the edits and revisions along the way. So, if I ever need to revert to a previous iteration, it’s easy to find.

When all is said and done, I may have three or 300 versions of revised files. Much like tax records, I like to hold on to these for a spell … at least until my files are out the door. This way, I have a clear archive of all changes that have been made throughout the revision process – something to look back to should I need to revert to a previous iteration, see where a strange change or unfamiliar edit was introduced, etc.

I also love working with cloud drives – like Google Drive or SharePoint or Dropbox. In fact, I almost exclusively work on files out of these systems these days. That way I know I’m always working on the latest version of a file, and I can do my work from any computer, versus having to email myself copies of the files. Multiple people can contribute, edit, or revise simultaneously. And it keeps an automatic archive.

Parting thoughts: Be alert when making your edits

One last word of advice: As much as possible, try to limit your distractions when working on files and changes. Close the door, drink some coffee, and turn the music off (or on, if that helps you focus). There’s nothing worse than making a host of excellent changes, only to realize later that you made them to the wrong version of a file and need to redo your work. Or, you’ve saved the changes in a strange place and the file has now gone missing. So pay attention.

And that’s final.

What is your favorite way to keep version control? Or what file-related horror story do you have to share?


Act-On Blog

Shewhart Control Charts and Trend Charts with Limits Lines in TIBCO Spotfire

Shewhart control charts are commonly used in statistical quality control for monitoring data from a business or industrial process. The goal of a statistical quality control program is to monitor, control, and reduce process variability. These charts often have three lines—a central line along with upper and lower control limits that are statistically derived. They enable the user to monitor a process for shifts, relative to a baseline historical period, that alter the location or variability of the measured statistic. There are a number of different types of charts, each with their own formulas for calculating control limits and methods of applying rules to determine whether the process is in or out of control.

One common set of control charts consists of a pair of charts:

1. The individual chart, which displays the individual measured values.

2. The moving range chart, which monitors the process variability.

Uses of control charts

—Monitor a process for special causes of variation that can occur. For example, a flood alarm that monitors water level.

—Control the location and variability of a process metric and not allow more process variation to occur than was present when the control limits were set. Often, a process capability study is performed prior to setting control limits, to ensure that the process is capable of performing within the specification limits. Specification limits define the region within which the metric must remain for proper functioning of the process or product.

—Drive continuous process improvement. Control charts identify out-of-control points, whose causes are identified and eliminated. Limits are then recalculated and tightened, and the process is repeated.

Popular types of control charts

Run Chart
x̅ and S Chart
x̅ and R Chart
Individual and Moving Range Charts
p-, np-, c-, and u-charts
UWMA and EWMA Charts
CUSUM Charts
Levey-Jennings
Multivariate Control Charts

How to create control and trend charts with limits lines using Spotfire

Creating lines with the Lines & Curves property

1. Control limits or specification limits may have predetermined values which can be set using the fixed value line option.

2. Predefined aggregated values can be used for creating lines such as the upper outer fence. The upper outer fence (UOF) is defined as the threshold located at Q3 + (3*IQR), where Q3 is the third quartile and IQR is the interquartile range (see the expression sketch after this list).

3. Property values can be used to specify dynamic control lines, which can be changed by a user, a script running in the background, or a data function. Property updates can be triggered by a user-friendly interface, such as selecting a sigma level and metric.

4. Custom expressions, which can be easily modified, help create specific calculations for a control line. They can be as simple as Avg([Y]) + 3.0*StdDev([Y]), and they can also be combined with properties.
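For instance, the UOF line from point 2 could be written as a custom expression. This is an illustrative sketch, not an expression from the original post, and it assumes Spotfire’s Percentile aggregation function:

Percentile([Y],75) + 3*(Percentile([Y],75) - Percentile([Y],25))

Here Percentile([Y],75) plays the role of Q3, and the parenthesized difference is the IQR.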

Calculation-based lines

Sometimes a line can be a complex equation: Y(Control Line) = C2 + (D/p) * cos [(p/D) * X + C1]

In this case, C1 and C2 are constants, which can be properties in Spotfire, and D (drag in this equation) can be a column. Spotfire math functions can be used to compute the cosine of the argument. The weight per length of line, p, can be another calculated column. In Spotfire, the expression may look like this, where the $ symbol indicates properties:

${RunYieldsTarget} + ([Metric5]/[Metric6])*Cos(([Metric5]/[Metric6])*[Metric1] + ${Rpk.calculated})

Moving range chart

In order to create moving ranges, the Spotfire LastPeriods OVER function is very useful. It includes the current node and the n – 1 previous nodes, which can be used to calculate moving averages.

Avg([Metric5]) OVER (LastPeriods(n,[Axis.X]))

This expression calculates an n-period average, where n is an integer. If the X-axis is defined as month and n is 3, it will provide a three-month rolling average.
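The moving range itself, the absolute difference between consecutive points, can be sketched with the same node-navigation approach. An illustrative two-period expression, not from the original post:

Max([Metric5]) OVER (LastPeriods(2,[Axis.X])) - Min([Metric5]) OVER (LastPeriods(2,[Axis.X]))

Over a window of two points, the maximum minus the minimum equals the absolute difference between the current and previous values.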

Control lines from another batch or process

Sometimes control lines come from another “golden batch” or process.

“Curve from another data table” allows users to specify a custom curve expression that makes use of parameters available in a specified data table or golden batch.

“Line from column value” can display lines based on X and Y coordinates that already exist in two columns of your analysis. For example, coordinate values could be calculated from the input data using a statistical calculation from a calculated column or even a data function, and the output result could be presented as coordinate values for a curve.

All these building blocks can be combined, nested, and morphed into a beautiful dashboard.


Try out Spotfire for yourself and see how easy it is to create insightful and beautiful dashboards from your data. Check out other Tips and Tricks blog posts to learn more.


The TIBCO Blog

Police Riot Control Vehicle


Mobile little fortress.

Antifa’s worst nightmare

March 21, 2017

Core Database Source Control Concepts

This article takes a conceptual approach to explaining core source control concepts, using diagrams and anecdotes as necessary to describe how all the pieces fit together.

  • Why source control – the basic purpose of source control and the benefits it offers, including:

    • maintaining an “audit trail” of what changed, who made the change, and when.
    • allowing team collaboration during development projects.
  • Basic source control components and concepts, including:
    • the repository, which stores the files and maintains their histories.
    • the working folder, which provides an isolated environment (sandbox) for creating, editing, and deleting files without affecting those in the repository.
    • workflow concepts, such as versioning, branching and merging files.

Why Source Control?

A source control solution provides users with the tools they need to manage, share, and preserve their electronic files. It does so in a manner that helps minimize the potential for conflicting changes and data loss; in other words, it prevents one user from inadvertently overwriting another user’s changes when multiple users work on the same files. If one user changes the name of a column while another updates its data type, the source control system will alert us to the conflict and allow the team to decide the correct outcome, via a merge operation.

Critically, a source control solution maintains a version history of the changes made to every file, over time, and provides a means for users to explore those changes and compare different file versions. This is why we often refer to a source control system as a version control system, or VCS for short.

These days, few software developers would consider building an application without the benefits that a VCS offers, but its adoption for databases has been slower. This is mainly because of the nature of a database, which must preserve its “state” (i.e. business data) between database versions. It means that having, in source control, the files that define the schema objects and other code is not the whole story. When upgrading a “live” database to a newer version, as it exists in the VCS, we can’t just tear down objects that store data and re-create them each time.

Nevertheless, despite this added complication, there is no reason why we should exclude databases from our source control practices. In fact, a VCS can be one of a database developer’s most valuable tools and the foundation stone for an effective and comprehensive change management strategy.

At its heart, the purpose of a VCS is to maintain a change history of our files. As soon as we enter a new file into source control, the system assigns it a version. Each time we commit a change to that file, the version increments, and we have access to the current version and all previous versions of the file. This versioning mechanism serves two core purposes:

  • Change auditing

    • Compare versions, find out exactly what changed between one version and another.
    • Find out who made the change and when; for example, find out when someone introduced a bug, and fix it quickly.
  • Team collaboration
    • Inspect the source control repository to find out what other team members have recently changed.
    • Share recent changes.
    • Coordinate work to minimize the potential for conflicting changes being made to the same file.
    • Resolve such conflicts when they occur (a process called merging).

By maintaining every version of a file, we can access the file as it existed at any revision in the repository, or we can roll back to a previous version. Source control systems also allow us to copy a particular file version (or set of files) and use and amend them for a different purpose, without affecting the original files. This is referred to as creating a branch, or fork.

I hope this gives you a sense of the benefits a source control system offers. We’ll look at more as we progress. The following sections paint a broad picture of the source control components and workflow that enable this functionality. We review the most important concepts in terms of the content creation, the storage, and the tracking strategies they enable, but we won’t go into too much detail.

The Source Control Repository

At the heart of a VCS is the repository, which stores and manages the files, and preserves file change histories.

Centralized source control systems support a single, central repository that sits on a server, and all approved users access it over a network. In distributed source control systems, each user has a private, local repository, as well as (optionally) a “master” repository, accessed by all users. We assume a centralized model for the conceptual examples in this article.

Regardless of the model used, when we add files to the repository, those files become part of a system that tracks who has worked on the file, what changes they made, and any other metadata necessary to identify and manage the file.

Repository storage mechanisms

The exact storage mechanism varies by source control system. Some products store both the repository’s content and its metadata in a database; some store all content and metadata in files (with the metadata often stored in hidden files); other products take a hybrid approach and store the metadata in a database, and the content within files.

The repository organizes files and the metadata associated with each file, in a way that mirrors the operating system’s folder hierarchies. In essence, the structure of files and folders in a source control system is the same as in a typical file management system such as Windows Explorer or Mac OS Finder. In fact, some source control systems leverage the local file management system in order to present the data in the repository. Figure 1 shows the server repository structure for a BookstoreProject repository, with the Databases folder expanded to reveal a Bookstore database. Don’t worry about the details of this structure yet, as we’ll get to them later.


Figure 1: Typical hierarchical folder structure in the repository.

What sets a source control repository apart from other file storage systems is its ability to maintain file histories. Everything we save to a repository is there forever, at least in theory. From the point that we first add files to the repository, the system maintains every version of every file, recording every change to those files, as well as to the folders that form the repository structure.

Source control of non-text files

Ideally, a VCS manages and tracks the changes made by all contributors to every type of file in the system, whether that file is a Word document, Excel spreadsheet, C# source code, or database script file. In reality, however, traditional source control solutions usually track changes only on text files, such as those used in application and database development, and tend to treat binary files, such as Word or Photoshop files, as second-class citizens. Even so, most solutions maintain the integrity of all files and help manage processes such as access control and file backup.

The Working Folder

Most users care less about how their source control system stores the file content and metadata, and more about being able to access and work on those files. Each repository includes a mechanism for maintaining the integrity of the files within their assigned folder structure and for making those files accessible to authorized users.

However, to be able to edit those files, each user needs a “private workspace,” a place on his or her local system, separate from the repository, to add, modify, or delete files, without affecting the integrity of the files preserved in the repository.

Most VCSs implement this private workspace through the working folder. The working folder is simply a folder, and set of subfolders, on the client computer’s file system, registered to the source control repository and structured identically to the folders in the repository. Figure 2 shows the working folder structure for a user of the BookstoreProject repository. This user has copied the entire repository to a working folder called BookstoreProject.


Figure 2: Typical working folder structure.

Each user stores in their working folder copies of some or all of the files in the repository, along with the metadata necessary for the files to participate in the source control system. As noted above, that metadata is often stored in hidden files.

We can update our working folder with the latest version of the files stored in the repository, as illustrated in Figure 3.


Figure 3: Copying files from the repository to the working folder.

We can edit the files in the working folder as necessary and then, eventually, commit the edited versions back to the repository. This process of “synchronizing” the working folder with the repository, i.e. updating the local working folder with any changes in the repository and committing any local changes to the repository, works differently from product to product and depends on whether the repository is centralized or distributed.

Regardless of repository type or product, the source control system always keeps the files in the repository separate from changes made to files in the working folder, until the user chooses to commit those changes to the repository.

Versioning and Collaborative Editing

The exact architecture and mechanisms that underpin versioning and collaborative editing vary by VCS, but the basic principles are constant. A user can obtain a local “working copy” of any file in the repository, make additions and amendments to that file, and then commit those changes back to the main repository. At that point, other users can request from the repository the amended version of the file or any of the previous versions. The VCS maintains a full change history for each file, so we can work out exactly what changed from one version to another.

In this section we’ll discuss, at a high level, how the repository maintains these file versions, as users make progressive changes to those files. Notionally, this versioning process is easy in a single-user system. A user updates his or her working folder with the latest versions of a set of files, and then edits those files as appropriate in a suitable client, such as Notepad for a plain text file, Visual Studio for an application file, or SQL Server Management Studio for a database file. The user then commits the changes to the repository, creating new versions of the files.

However, another key function of source control is to enable a group of people to work collaboratively on the set of files that comprise a development project. In other words, the VCS must allow multiple users to modify a file concurrently, while minimizing the potential for conflicts and data loss. Let’s see how source control systems allow for and manage these concurrent changes.

How versioning works

Let’s assume we’ve created a project directory in source control for an Animal-Vegetable-Mineral (AVM) application and that we’ve established a working folder for this project.

Figure 4 depicts the progressive changes to the application over three revisions. Notice that the repository preserves all the changes to the files, with each version assigned a revision number. Note that Figure 4 is not in any way a depiction of how a VCS maintains different file versions internally. It is merely to help visualize the process of how it can allow us to access different file versions, and provide a history of changes to our files over time.


Figure 4: Working with files from the AVM project in source control.

In Revision 1, we committed to the repository (from our working folder) two new files, Animals.txt and Vegetables.txt. Revision 1 represents the first and latest file versions in the AVM repository.

We edited Animals.txt to replace skunk with elk, and created a third file called Minerals.txt, and committed the changes to the repository. Collectively, these changes form Revision 2. Vegetables.txt remains unchanged from Revision 1.

Next, we edited Vegetables.txt, changing sprouts to carrots, and edited Minerals.txt, changing potash to pyrite and adding silica. These changes form Revision 3, with Animals.txt unaltered from Revision 2.

Mostly, users are interested in working with the latest folder and file versions in the repository, but we can also request to see the repository as it existed at any earlier revision, with each file exactly as it existed at that point in time. For example, if we were to pull Revision 2, we would get the Revision 2 copies of the Animals.txt and Minerals.txt files, as well as the Revision 1 version of the Vegetables.txt file. In this way, we can build and deploy a specific “version” of the application or database.
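In a centralized VCS such as Subversion, for example, requesting a specific revision is a single command. This is an illustrative sketch, and the repository URL is hypothetical:

# Get a working copy of the project at the latest revision
svn checkout http://example.com/svn/AVM/trunk AVM

# Wind the working copy back to Revision 2: Revision 2 of Animals.txt
# and Minerals.txt, plus the Revision 1 version of Vegetables.txt
svn update -r 2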

Conceptual versus actual source control implementation

In this article, the descriptions of the versioning process are conceptual in nature. An actual source control implementation will vary according to product. It might not store this many different file versions, or it might build a given version by storing a record of the differences (the delta) between a current version and the previous version.

Since developers usually want to ensure they are working with all the most recent versions of their project files, they update their working folders regularly to get the latest versions. When they view their working folders, they are viewing the latest version of the repository and all its folders and files, at the point in time they did their last update.

However, we can also request to view the revision history for the repository (for example, by accessing the repository’s log). Exactly what we see when viewing the revision history depends on the VCS, but it’s likely to include information such as the revision number, the action, who made the changes, and when, and the author’s comment, i.e. a description of the change. Figure 5 shows what the log might look like for the AVM project folder, after the sequence of changes (and assuming two users, Fred and Barb).


Figure 5: Storing files in the repository of a source control system.

A VCS, as noted earlier, often stores the differences between each file version, rather than the full file for every version. We refer to each set of differences as the delta. If we request to view a file as it existed at a particular revision, the VCS might, for example, retrieve the last stored complete file and then apply the deltas in the correct order going forward.

Likewise, a VCS will usually provide an easy visual way for users to see a list of what changed between any two revisions in the repository. We call this performing a diff, short for “difference between revisions”.
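In Subversion, for example, both the revision history and a diff are available from the command line. An illustrative sketch, run from inside a working folder:

# Revision history for one file: who changed it, when, and why
svn log -v Vegetables.txt

# What changed in Vegetables.txt between Revisions 1 and 3
svn diff -r 1:3 Vegetables.txt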

Collaborative editing

Any source control solution must provide the structure necessary to permit collaboration on different file versions in the repository, while preserving every version of each file.

The potential difficulty arises when more than one user works on the same version of a file at the same time. Let’s say both Fred and Barb have in their respective working folders Revision 2 of our AVM app. Fred edits Vegetables.txt, changing sprouts to carrots and commits the change. At roughly the same time, Barb edits the same file, changing sprouts to peas, and commits the change. What should be the result in the source control repository? If the “last commit wins,” we’d simply lose Fred’s changes from the current version of the file.

Older source control systems (often referred to as “first generation”) get around such difficulties by imposing an isolated editing (or locking) model, whereby only one user at a time can work on a particular version of a file.

Most modern source control systems enable a group of users to work collaboratively on the same version of a file. Referred to as a concurrent editing model, this process allows them to reconcile, or merge, the changes made by more than one user to the same file and, in the process, resolve any conflicting changes.

Isolated editing

A traditional “first-generation” source control solution, such as Source Code Control System (SCCS), developed at Bell Labs in 1972, uses a central repository and a locking model that permits only one person at a time to work on a file.

To use a database metaphor, we can liken the isolated editing model to SQL Server’s pessimistic concurrency model. It assumes, pessimistically, that a conflict is likely if multiple users are “competing” to modify the same file, so it takes locks to prevent it happening.

A typical workflow in source control might look as follows:

  1. Fred performs a check-out of the latest version of the Animals.txt file, in this case, Revision 1.

    1. If the file does not exist in Fred’s working folder, the source control system will copy it over.
    2. The source control system “locks” the file in the repository. (The exact “locking” mechanism varies by system.)
  2. Fred edits the file in his editor of choice. He deletes skunk and adds elk.
  3. Fred saves the changes to his local working folder.
  4. Barb attempts to check out Animals.txt, but cannot because Fred has it locked. Although Fred saved his changes locally, he has yet to perform a check-in to the repository and so the file remains locked and no one else can check it out.
  5. However, Barb can download Animals.txt to her working folder as a read-only copy, so she can at least see the latest version as it exists in the repository.

Figure 6 provides a pictorial overview of this process, to this point.


Figure 6: Allowing only one user at a time to modify a file.

From this point, the workflow might proceed as follows:

  1. Fred performs a check-in, which copies the updated file from his working folder into the repository.

    1. The repository stores Revision 2 of Animals.txt, while retaining Revision 1.
    2. The source control system releases any locks, so both versions of the file are available for checkout.
  2. Barb can now work freely on Animals.txt. She can:
    1. View Revision 2, with an elk instead of a skunk, by re-syncing her working folder.
    2. Compare the two versions to determine what changed.
    3. Check out the file for editing. The source control system will automatically copy the latest version, Revision 2, to her working folder and lock the file in the repository.

Remember that source control terminology can vary a lot depending on the system. For example, some systems refer to the check-out operation as a “get,” and the check-in operation as an “add,” a “delta,” or a “commit.”

An important point in all this is that the repository only increments the revision number in response to the check-in operation. A user can check out a file, edit it and save it to his working folder but then decide to “revert” to the original version of the file. The source control system will release the file, and the revision number will remain unchanged. The user’s local changes are lost, unless he or she saved them elsewhere.
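Although Subversion is a later-generation system, it still offers an optional locking model that mirrors this workflow. A sketch, with paths relative to a working folder:

# Fred locks the file; no one else can commit changes to it while the lock holds
svn lock Animals.txt -m "Replacing skunk with elk"

# ...Fred edits Animals.txt in his editor of choice...

# Check-in: commits the new version and, by default, releases the lock
svn commit -m "Replace skunk with elk" Animals.txt

# Or, to abandon the edit: release the lock and discard local changes
svn unlock Animals.txt
svn revert Animals.txt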

Concurrent editing

If working on the files in isolation is the safest route to controlling changes and avoiding conflicts, it’s also the slowest. For a more efficient workflow, modern source control systems allow for the possibility that two or more users will edit the same file at the same time. The notion of a check-out operation, with its attendant locking, disappears and instead each user performs an “update” to refresh his or her working folder with the most recent copies of the repository files. Users can then work on these files concurrently, then commit their changes to the repository.

If isolated editing is akin to SQL Server’s pessimistic concurrency model, then the concurrent editing model is more like SQL Server’s optimistic concurrency model. It hopes, optimistically, that no other user will “interfere” with a file on which another user is working, but it has to deal with the consequences if it happens.

Let’s see how this might alter our typical workflow. Our description of the various processes uses the most common terms associated with centralized version control systems, with the usual proviso that you will see differences even among centralized systems, and certainly for distributed systems where these processes work slightly differently.

  1. Fred performs an update to retrieve the latest files. In this case, he now has in his working folder Revision 2 of Animals.txt.
  2. Barb does likewise.
  3. Fred edits his working copy of the file by adding wolf.
  4. Barb edits her working copy of the file by adding fox.
  5. Fred saves the changed file to his working folder.
  6. Fred commits the changed file to the repository. The updated file becomes Revision 3 in the repository.
  7. Barb saves her edited copy to her working folder.
  8. Barb tries to commit to the repository, but the repository detects that the version of the file has changed since Barb’s last update. The commit fails.
  9. Barb performs an update, retrieving Revision 3 into her working folder, and must now “merge” the changes in her working copy of the file with those in the Revision 3 copy of the file. This merge process might be automated, manual, or a combination of both.
  10. Barb commits the merged file to the repository. The updated file is designated as Revision 4.

Figure 7 provides an overview of this process.


Figure 7: Concurrent editing of the Animals.txt file.
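Barb’s side of this exchange, sketched as an illustrative Subversion session:

# Barb refreshes her working folder (receiving Revision 2)
svn update

# ...Barb adds fox to Animals.txt...

# The commit is rejected: the repository moved on to Revision 3 after Barb's update
svn commit -m "Add fox"

# Updating again merges Fred's wolf into Barb's working copy
svn update

# The merged file now commits cleanly, becoming Revision 4
svn commit -m "Add fox"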

In this simple example, Fred and Barb each make changes that do not really conflict; the wolf and the fox can easily co-exist, at least within the confines of a text file. In such cases, the source control system will probably perform this sort of merge automatically, but in this case even merging the documents manually is a relatively painless process whereby Revision 4 simply contains both the users’ changes, as shown in Figure 8.


Figure 8: Merging two versions of a file to create a third version.

Again, this may not be exactly how a VCS implements a merge, but it gives a good idea of how it works conceptually. Some users don’t even consider this a merge operation, since it can occur automatically as part of the update operation. Some merges, however, are not quite so straightforward.

Dealing with conflicts during concurrent edits

A VCS can sometimes auto-merge changes made by different users to the same file. Sometimes, however, concurrent changes to the same version of a file will cause a real conflict, and to resolve it one of the users will need to perform a manual merge operation, within his or her working folder.

Let’s rewind to the stage of our Animals.txt example, where each user was working with Revision 2. Suppose that, in addition to adding wolf to his file, Fred changed bear to black bear, and committed the changes (creating Revision 3). At the same time, in addition to adding fox to her file, let’s assume Barb changed bear to brown bear. Now when Barb tries to commit her file, an actual conflict emerges, one that Barb must resolve, as shown in Figure 9.


Figure 9: Addressing conflicts when trying to merge files.

The source control system can’t merge the two file versions until Barb resolves the conflict between black bear and brown bear (the additions of wolf and fox still cause no problem).

When conflicts of this nature arise, someone must examine the comparison and determine which version of bear should win out. In this case, Barb decides to go with black bear.
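In Subversion, for instance, a conflicting update writes both versions into the working copy of the file, set off by conflict markers, something like:

<<<<<<< .mine
brown bear
=======
black bear
>>>>>>> .r3

Barb edits the file to keep black bear, then tells the system the conflict is settled. An illustrative sketch:

# Mark the conflict as resolved, keeping the edited working copy
svn resolve --accept=working Animals.txt
svn commit -m "Resolve bear conflict: keep black bear"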

It’s worth considering the risk associated with this merge process. Barb’s commit fails, so she can’t save her changes to the repository until she can successfully perform a merge. If something goes wrong with the merge operation, she risks losing her changes entirely. This might be a minor problem for small textual changes like these, but a big problem if she’s trying to merge in substantial and complex changes to application logic. This is why the source control mantra is: commit small changes often.

Merging in distributed source control systems

In distributed source control systems, each user’s client hosts a local repository as well as a working folder. In the distributed model, Barb commits to her local repository, saving her changes, and then pushes the updated file to the central repository, which would fail because Fred got there first. We won’t go into detail on what comes next, but it means we’d see different file versions and version numbers in the central repository from what we saw in our “centralized” example. The main point at this stage is simply that merging is “safer” on distributed source control systems because users can always commit their local changes to their local repository.
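In Git, for example, that flow might look like the following sketch, using the default origin remote and the master branch:

# Barb's changes are safe in her local repository first
git commit -am "Change sprouts to peas"

# The push is rejected because Fred's commit reached the central repository first
git push origin master

# Fetch and merge Fred's work, resolving any conflicts locally
git pull origin master

# With the merge committed locally, the push now succeeds
git push origin master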

Another wise practice is: update often. The earlier and more often we update our working folders and commit our changes, the better the processes work for everyone and the easier it is to merge files and resolve conflicts.

Branching

The way in which we organize files within a repository depends on the team’s preferences, the project, and on the source control system itself. One common approach is to organize files by project, with each set of project files, whether related to database development, application development or something else, stored within the main project folder.

We organize the project folder itself according to the standards set for the organization. In many cases, we will store the files related to the main development effort in a common root subfolder, often named trunk, but sometimes referred to by other names, such as master. We then create subfolders in the trunk folder as required; for example, as we saw earlier in Figure 1, a subfolder for database development and another for application development, and so on.

However, relying exclusively on the trunk folder assumes that everyone on the team is working simultaneously on the project files associated with the main development effort. In reality, it is common that certain team members will wish to “spin off” from the main effort to work on separate, but related projects, often referred to as branches. For this reason, you’ll find that the main project folder will also contain a branches folder (or a folder with a comparable name), in addition to trunk.

Let’s assume that we’ve created a project folder in our repository for our AVM project, added the trunk and branches folders, and built our data-driven application. At this point, all our files are stored in the trunk folder.

At Revision 100, the team releases the latest version of the application to customers as v1.0. They’re now ready to begin developing v2.0. At the same time, customers are submitting feedback and bug reports for v1.0. This is a classic example in software development of where it will be useful to create a branch (of course, various other branching strategies are used too). We create a branch at a particular revision in the repository. When we do this, the repository creates a new path location, identified by whatever we call the branch.

In this case, we’ll create a branch at Revision 100 and call it 1.0_bug fixes. From the user’s perspective, this process creates a separate 1.0_bug fixes subfolder in the branches folder, at Revision 101, and populates it with project files that point back to the Revision 100 files in the trunk folder. Developers can work on the branch files as they would in the trunk, but their efforts remain independent of the trunk development efforts, while preserving the fact that each set of files shares the same roots.
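In Subversion, for instance, a branch is a cheap server-side copy made with a single command. An illustrative sketch, with a hypothetical repository URL and the space in the branch name replaced by a hyphen:

svn copy http://example.com/svn/AVM/trunk \
         http://example.com/svn/AVM/branches/1.0_bug-fixes \
         -m "Create the 1.0 bug fix branch at Revision 100"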

Tags and build numbers

Closely related to the concept of a branch is a tag. In fact, a tag is virtually identical to a branch in that it is a named location for a set of files in the repository, created at a particular revision. We can think of creating a tag as a way to name a set or subset of files associated with a particular revision. The big difference is that we don’t ever modify tags. They represent a stable release of a product, with the tag usually representing some meaningful build number. In our example, we might create a tag at Revision 100, called “v1.0.”
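Creating a tag can use exactly the same copy operation, just targeting the tags folder; by convention, the result is then never modified. An illustrative command, continuing the hypothetical URL from above:

svn copy http://example.com/svn/AVM/trunk \
         http://example.com/svn/AVM/tags/v1.0 \
         -m "Tag the v1.0 release at Revision 100"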

While part of the team works on the 1.0 bug fixes in the branch, the rest of the team continues the 2.0 development in the trunk. Those assigned to the bug fixes can work with the branch folders and files as they would with those in the trunk. They can edit the files in their working folders and commit changes to the repository, which will maintain revision histories for the life of each branch file, starting from the point of branch creation.

For example, let’s say Barb creates a branch of the AVM project, containing all the project files, called 1.0_bug fixes, in preparation for the first maintenance release (v1.1). This means that the branch folder is at Revision 101, and contains, among other files, the latest revision (let’s say Revision 100) of Animals.txt, as it existed in the trunk at the time she created the branch. Meanwhile, Fred is working in the trunk in preparation for the release of v2.0.

Fred updates his working folder with the latest file versions in the trunk. He retrieves Revision 100 of the Animals.txt file, adds chipmunk and walrus and deletes fox. He saves the file and commits his changes to the repository. This creates Revision 102 in the repository.

Barb updates her working folder with the latest file versions in the branch. She retrieves the latest version of the Animals.txt file in the branch, which is still Revision 100; Fred’s commit has no effect on the file versions in the branch. Barb changes black bear to bear and commits the change, creating Revision 103 in the repository, as shown in Figure 10.


Figure 10: Creating a branch for the Animals.txt file.

As you can see, at this point, we have two simultaneous but separate development efforts. Of course, at some point the team may want to merge changes made in the 1.0_bug fixes branch back into the trunk so the next full product release can benefit from the maintenance fixes.

Merging

Previously, we discussed the need to perform a merge operation to resolve conflicting changes to the same version of a file in the repository. Similarly, when we create a branch, we’ll at some point want to merge the changes made to the branch into the trunk. Conversely, we will also need to merge changes from the trunk into a branch. For example, this might be necessary so that a long-running “feature branch” can catch up with the changes in the trunk. As you can imagine, a merge in either direction could be a complex process, especially if conflicts arise across many project files.

The type of comparison made when merging between branches is similar to merging files during normal concurrent editing. In fact, within a particular source control system, the two processes might seem nearly identical. Conceptually, however, they are somewhat different in intent. When merging during concurrent editing of the same file, we’re trying to resolve all conflicts to produce a copy of the file that represents a single source of truth. When merging a branch to the trunk, or vice versa, we’re retaining two sources of the truth and we’re interested only in incorporating the changes from one into the other.

For example, suppose Barb wishes to merge into the trunk the changes she made in the branch. Barb’s latest commit to the repository was Revision 103 (+bear, −black bear), affecting the Animals.txt file in the branch. Meanwhile, Fred’s latest commit to the repository was Revision 102 (+chipmunk, +walrus, −fox), affecting the Animals.txt file in trunk. The resulting merge operation will produce a new version of the Animals.txt file in the trunk (Revision 104), while the branch version will remain untouched at Revision 103.
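With Subversion 1.8 or later, for instance, Barb would run the merge from an up-to-date working copy of the trunk. An illustrative sketch, continuing the hypothetical URLs from earlier:

# From a clean, up-to-date working copy of trunk
svn update
svn merge http://example.com/svn/AVM/branches/1.0_bug-fixes

# Review the result, resolve any conflicts, then commit (creating Revision 104)
svn commit -m "Merge 1.0_bug-fixes into trunk"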

Let’s consider each of these changes in turn, in the context of the merge operation.


Figure 11: Resolving conflicts in the Animals.txt file.

Fred’s additions of chipmunk and walrus present no problem because they affect only the file slated for v2.0 (the trunk) and have nothing to do with v1.1 (the branch). In other words, Revision 104 will still contain the chipmunk and walrus entries.

The next potential conflict is with the fox listing. Fred removed fox from the file in trunk, but Barb did not remove it from the version in her branch. You might think that in merging from branch to trunk, we’d add fox back into the trunk version. However, in effect, this would simply overwrite Fred’s change, and any effective source control solution will recognize the potential for this sort of problem and avoid it. It may help to think of the merge in terms of the branch change set, in this case all the changes in the current branch revision compared to the revision on creating the branch. Barb wishes to apply this change set (−black bear, +bear) to the trunk file. This means, in SVN at least, that it will retain all other entries as they currently exist in trunk, and so fox will still not exist in the merged version in the trunk. Another VCS might mark this as a possible conflict: the fact that the branch version assumes the fox entry exists, while it no longer exists in the trunk, is a potential problem. Ideally, before performing the merge, Barb would have updated her working copy of trunk and merged from trunk to branch, or at least inspected the differences and raised any possible issues with the team before performing her merge in the opposite direction.

Merging works differently in different source control systems

The methods used to perform the branch comparisons and merges vary from product to product, as does the amount of manual input involved to ensure that the merge does not do more damage than good. Each product is different and each configuration within a product can vary.

Finally, we have the potential conflict between bear and black bear. Just because Barb changed it in the branch does not automatically mean it should be changed in the new release. Even so, SVN does not treat this as a conflict, though another VCS might. Only if Fred had also changed the black bear listing would SVN treat it as a conflict. The result is that the merged file in the trunk will contain the entry bear. Again, the onus is on Barb to ensure that this change from the maintenance release will cause no problems when applied to v2.0.

Branching usually comes off without a hitch; we point to the files and folders in the trunk that we want to branch and within seconds, off we go, with a new branch. Merging is the tricky part. The risks and overhead associated with merging can sometimes keep organizations from taking advantage of these capabilities. They might branch, but they don’t merge, often resulting in duplicated development efforts and manual copy-and-paste operations.

On top of this, it’s arguable that some source control systems are better at merging than others. Git, for example, was designed to merge, whereas SVN has a reputation for inflicting its fair share of agony on those trying to merge files.

Ultimately, though, the ability to branch and merge is crucial to any organization that needs to expedite their projects and work on those projects in parallel. The mantra is to merge early and often. If you do so, then it’s not usually too painful.

Summary

We’ve discussed many of the primary benefits of a source control system, and taken an initial, largely conceptual, look at the components (repository and working folder) and processes (branching and merging) of such a system, which help us realize these benefits.

Application developers have been reaping these benefits for many years, but database developers are only just starting to catch up. It’s true that database development is different from application development and often database developers have to remind the rest of the development team of those differences, but that does not mean they cannot benefit from a proven system to store and manage files, track changes, work with different versions, and maintain a historical record of the file’s evolution.


SQL – Simple Talk

Outta control


Thanks Tommy


About Krisgo

I’m a mom that has worn many different hats in this life: from scout leader, camp craft teacher, parents group president, colorguard coach, and member of the community band, to stay-at-home mom and full-time worker, I’ve done it all – almost! I still love learning new things, especially creating and cooking. Most of all I love to laugh! Thanks for visiting – come back soon!


Deep Fried Bits

Document control practices in the age of HIPAA

Information is the lifeblood of any organization, but having access to too much — especially personally identifiable information — can cause problems if the proper document control practices aren’t in place.

It’s now more important than ever to keep information out of the wrong hands. The Identity Theft Resource Center reports that there were 781 data breaches in 2015 — the second-highest year on record since the ITRC began tracking breaches in 2005.

Lawyers, for example, are taking a closer look at the Health Insurance Portability and Accountability Act (HIPAA) privacy rule and their responsibility to keep client health information within their firms private. The fines for noncompliance are enormous, so it is imperative that everyone in firms understands their roles in document control practices.

The HIPAA privacy rule

HIPAA establishes national standards to protect individuals’ medical records and other personal health information. It applies to health plans, healthcare clearinghouses and healthcare providers that conduct certain electronic healthcare transactions. The overall objective is to protect the privacy of personal health information and set limits and parameters on the access and use of such information without patient authorization.

Law firms must make reasonable efforts to protect their clients’ information from anyone who doesn’t require access to that information to do their jobs. This approach is called a pessimistic model of document management. Law firms have traditionally operated under an optimistic model, which allows access to pretty much everything. However, the rise of the data breach has most definitely changed the game.

Data privacy: Not just a technology issue

Information protection is not just an IT issue, and data breaches should not be viewed simply as a breakdown in technological controls. Every department — indeed, every employee — has a part to play in the security of information under the HIPAA privacy rule. The reality is that all firms must acknowledge the enterprise-wide disruption that can occur when a data breach is discovered. The firms that prepare ahead of time will not only be able to withstand the data breach, but they can also safeguard their positive reputation for their clients, partners and employees. Making the choice to implement an information governance and security program is the first step toward data protection.

Not just an electronic issue

While the majority of data breaches involve electronic files, paper files are also susceptible. Of the breaches reported by the ITRC, about 30 were paper data breaches. For example, one insurance company reported that more than 5,000 records were exposed in March of last year. The nature of the breach? Eleven people were charged with identity theft and credit card fraud after an employee allegedly printed and shared screenshots of more than 5,000 subscriber profiles. Most of the other reported paper data breaches involved vandalism or break-ins. All of this is a result of poor document management.

Document control: Where to begin

Law firm records managers and information governance professionals should develop a program for how information is managed and then communicate the data protection requirements to all attorneys and staff.

A point to remember is that data protection applies to both physical and electronic records. A proper chain of custody workflow must therefore be part of the data protection requirements. Chain of custody helps organizations understand the who, what, when, where and why of a particular document.

For physical records, barcode tracking and RFID technology are the leading tools in this arena; for electronic records, a document management system fits the bill, since it can capture this type of metadata.
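
To make chain of custody concrete, here is a minimal sketch in Python of the kind of record such a system might keep; the field and event names are illustrative assumptions, not taken from any particular records product.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class CustodyEvent:
    # Captures the who, what, when, where and why of one document touch.
    document_id: str   # what
    actor: str         # who
    action: str        # e.g. "checked_out", "printed", "shredded" (invented labels)
    location: str      # where: records room, cabinet, server
    reason: str        # why the document was handled
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# An append-only log preserves the chain; events are recorded, never edited.
custody_log: list[CustodyEvent] = []
custody_log.append(CustodyEvent("matter-4411/intake.pdf", "jdoe",
                                "checked_out", "records room B",
                                "client intake review"))

Freezing the dataclass and treating the log as append-only mirrors the point of a chain of custody: the history itself must be tamper-evident.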

Of course, proper security permissions apply whether the information is paper or electronic. Physical records can be stored in locked cabinets; electronic records should be protected with encryption software to prevent information from falling into the wrong hands. There is no shortage of such tools on the market today.
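
As one hedged illustration of the electronic side, the cryptography package for Python provides authenticated symmetric encryption in a few lines. Key management, the genuinely hard part, is not shown here, and the sample record is invented.

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, keep this in a key vault, never beside the data
f = Fernet(key)

token = f.encrypt(b"patient: Jane Doe, policy 5512")  # ciphertext, safe at rest
plaintext = f.decrypt(token)  # raises InvalidToken if the data was tampered with

Because Fernet authenticates as well as encrypts, a modified record fails to decrypt rather than silently yielding garbage.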



ECM, collaboration and search news and features

Internal Control – From Necessary Evil To Operational Excellence

Hyperconnectivity, the concept synonymous with the Internet of Things (IoT), is the emerging face of IT in which applications, machine-based sensors, and high-speed networks merge to create constantly updated streams of data. Hyperconnectivity can enable new business processes and services and help companies make better day-to-day decisions. In a recent survey by the Economist Intelligence Unit, 6 of 10 CIOs said that not being able to adapt for hyperconnectivity is a “grave risk” to their business.

IoT technologies are beginning to drive new competitive advantage by helping consumers manage their lives (Amazon Echo), save money (Ôasys water usage monitoring), and secure their homes (August Smart Lock). The IoT also has the potential to save lives. In healthcare, this means streaming data from patient monitoring devices to keep caregivers informed of critical indicators or preventing equipment failures in the ER. In manufacturing, the IoT helps drive down the cost of production through real-time alerts on the shop floor that indicate machine issues and automatically correct problems. That means lower costs for consumers.

Several experts from the IT world share their ideas on the challenges and opportunities in this rapidly expanding sector.

Q: Where are the most exciting and viable opportunities right now for companies looking into IoT strategies to drive their business?

Mike Kavis: The best use case is optimizing manufacturing by knowing immediately what machines or parts need maintenance, which can improve quality and achieve faster time to market. Agriculture is all over this as well. Farms are looking at how they can collect information about the environment to optimize yield. Even insurance companies are getting more information about their customers and delivering custom solutions. Pricing is related to risk, and in the past that has been linked to demographics. If you are a teenager, you are automatically deemed a higher risk, but now providers can tap into usage data on how the vehicle is being driven and give you a lower rate if you present a lower risk. That can be a competitive advantage.
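
As a toy sketch of the usage-based pricing Kavis describes, consider the following; the weights and thresholds are invented for illustration and are not drawn from any insurer.

def premium_multiplier(miles_per_year: float,
                       harsh_brakes_per_100mi: float,
                       night_driving_share: float) -> float:
    """Adjust a demographics-neutral baseline of 1.0 using observed driving behavior."""
    multiplier = 1.0
    if harsh_brakes_per_100mi > 5:
        multiplier += 0.15   # frequent hard braking suggests higher risk
    if night_driving_share > 0.30:
        multiplier += 0.10   # heavy night driving is riskier
    if miles_per_year < 6_000:
        multiplier -= 0.10   # low mileage earns a discount
    return max(multiplier, 0.5)

# A careful teenage driver can now price below the demographic default:
print(premium_multiplier(5_000, 1.2, 0.05))  # 0.9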

Dinesh Sharma: Let me give you an example from mining. If your power tools are fitted with sensors and you have a full real-time view of your assets, you can position them in the appropriate places. Wearable technology lets you know where the people who might need these tools are, which then enables more efficient use of your assets. The mine is more efficient, which means reduced costs, and that ultimately results in a margin advantage over your competition. Over time, the competitive advantage will build and there will be more money to invest in further digital transformation capabilities. Meanwhile, other mining companies that aren’t investing in these technologies fall further behind.

Q: With the IoT, how should CIOs and other executives think and act differently?

Martha Heller: The points of connection between IT and the business should be as strategic and consultative as possible. For example, the folks from IT who work directly with R&D, marketing, and data scientists should be unencumbered by issues such as network reliability, help desk tickets, and application support. Their job is to act as business leaders and focus on innovative ideas, not to worry for an instant about “Oh, your e-mail isn’t working?” There’s also obviously the need for speed and agility. We’ve got to find a way to transform a business idea into something that the businessperson can touch and feel as quickly as possible.

Greg Kahn: Companies are realizing that they need to partner with others to move the IoT promise forward. It’s not feasible for one company to create an entire ecosystem on its own. After all, a consumer might own a Dell laptop, a Samsung TV, an Apple watch, a Nest device, an August Smart Lock, and a Whirlpool refrigerator.

It is highly unrealistic to think that consumers will exchange all of their electronic equipment and appliances for new “connected devices.” They are more likely to accept bridge solutions (such as what Amazon is offering with its Dash Replenishment Service and Echo) that supplement existing products. CIOs and other C-suite executives will need to embrace partnerships boldly and spend considerable time strategizing with like-minded individuals at other companies. They should also consider setting up internal venture arms or accelerators as a way to develop new solutions to challenges that the IoT will bring.

Q: What is the emerging technology strategy for effectively enabling the IoT?

Kavis: IT organizations are still torn between DIY cloud and public cloud, but the IoT, with the petabytes of data it produces, changes the thinking. Is it really economical to build this on your own when you can get the storage for pennies in the cloud? The IoT also requires a different architecture: highly distributed, able to process high volumes of data, and highly available to manage real-time data streaming.

On-premises systems aren’t really made for these challenges, whereas the public cloud is built for autoscaling. The hardest part is connecting all the sensors and securing them. Cloud providers, however, are bringing to market IoT platforms that connect the sensors to the cloud infrastructure, so developers can start creating business logic and applications on top of the data. Vendors are taking care of the IT plumbing of getting data into the systems and handling all that complexity so the CIO doesn’t need to be the expert.

Kahn: All organizations, regardless of whether they outsource data storage and analysis or keep it in house, need to be ready for the influx of information that’s going to be generated by IoT devices. It is an order of magnitude greater than what we see today. Those that can quickly leverage that data to improve operational efficiency and consumer engagement will win.

Sharma: The future is going to be characterized by machine interactions with core business systems instead of by human interactions. Having a platform that understands what’s going on inside a store – the traffic near certain products together with point-of-sale data – means we can observe when there’s been a lot of traffic but the product’s just not selling. Or if we can see that certain products are selling well, we can feed that data directly into our supply chain. So without any human interaction, when we start to see changes in buying behavior we can update our predictive models. And if we see traffic increasing in another part of the store in a similar pattern we can refine the algorithm. We can automatically increase supply of the product that’s in the other part of the store. The concept of a core system that runs your process and workflow for your business but is hyperconnected will be essential in the future.
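
A minimal sketch of the feedback loop Sharma outlines might look like this; the zone names, thresholds and rules are invented, and a production system would feed a forecasting model rather than fixed if-statements.

def review_zone(zone: str, visits_per_hour: float, units_sold_per_hour: float) -> str:
    """Compare foot traffic against point-of-sale data for one store zone."""
    conversion = units_sold_per_hour / visits_per_hour if visits_per_hour else 0.0
    if visits_per_hour > 100 and conversion < 0.02:
        return zone + ": heavy traffic but little selling; review placement or price"
    if conversion > 0.20:
        return zone + ": selling well; raise the replenishment order"
    return zone + ": no action"

print(review_zone("end-cap A", visits_per_hour=140, units_sold_per_hour=1))
print(review_zone("aisle 7", visits_per_hour=40, units_sold_per_hour=12))

The point is the absence of a human in the loop: traffic and sales data arrive as machine events, and the supply chain reacts automatically.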

Q: Privacy and security are among the top concerns with hyperconnectivity. Are there any useful approaches yet?

Kavis: We have a lot less control over what is coming into companies from all these devices, which is creating many more openings for hackers to get inside an organization. There will be specialized security platforms and services to address this, and hardware companies are putting security on sensors in the field. The IoT offers great opportunities for security experts wanting to specialize in this area.

Kahn: The privacy and security issues are not going to be solved anytime soon. Firms will have to learn how to continually develop new defense mechanisms to thwart cyber threats. We’ve seen that play out in the United States. In the past two years, data breaches have occurred at both brick-and-mortar and online retailers. The brick-and-mortar retail industry responded with a new encryption device: the chip card payment reader. I believe it will become a cost of business going forward to continually create new encryption capabilities. I have two immediate suggestions for companies: (1) develop multifactor authentication to limit the threat of cyber attacks, and (2) put protocols in place whereby you can shut down portions of systems quickly if breaches do occur, thereby protecting as much data as possible.
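
For Kahn’s first suggestion, a time-based one-time password is one common second factor. Here is a minimal RFC 6238 sketch using only the Python standard library; a real deployment should rely on a vetted authentication service rather than hand-rolled crypto, and the secret below is a placeholder.

import base64, hashlib, hmac, struct, time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a base32 shared secret."""
    key = base64.b32decode(shared_secret_b32)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F   # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and authenticator app share the secret once, at enrollment.
print(totp("JBSWY3DPEHPK3PXP"))

Even if a password leaks in a breach, an attacker without the enrolled device cannot produce the current code.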

Polly Traylor is a freelance writer who reports frequently about business and technology.

Download the PDF (867 KB)



Digitalist Magazine

Google’s Voice Access app lets you control Android devices by speaking

Google today announced the beta launch of Voice Access, an app that will let people use speech recognition to control Android devices.

While anyone will presumably be able to use it, it’s designed with specific groups of people in mind — “people who have difficulty manipulating a touch screen due to paralysis, tremor, temporary injury or other reasons,” Eve Andersson, manager of accessibility engineering at Google, wrote in a blog post.

“For example, you can say ‘open Chrome’ or ‘go home’ to navigate around the phone, or interact with the screen by saying ‘click next’ or ‘scroll down,’” Andersson wrote.

Above: Google’s Voice Access app. Image credit: Google.
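
To picture the command model Andersson describes, here is a toy dispatcher; it is not Google’s implementation, and the phrase list is assumed from the examples in the post.

def handle_utterance(utterance: str) -> str:
    """Map a recognized phrase to a device action (illustrative only)."""
    commands = {
        "open chrome": "launching Chrome",
        "go home": "navigating to the home screen",
        "click next": "tapping the 'next' control",
        "scroll down": "scrolling the view down",
    }
    return commands.get(utterance.strip().lower(), "unrecognized command")

print(handle_utterance("Open Chrome"))   # launching Chrome
print(handle_utterance("scroll down"))   # scrolling the view down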

In launching Voice Access, Google is the latest company in the past few weeks to emphasize what it’s doing in the area of accessibility. Twitter started letting people submit captions for images they tweet out. Facebook enhanced the screen reader for iOS with automatically generated spoken image captions. Microsoft talked about the Seeing AI app at its Build developer conference. And Apple released videos showing how its iPad tablet helps an autistic person communicate with others.

Notably, Google has greatly improved its speech-recognition technology, which draws on artificial intelligence. It’s deployed on many millions of Android devices, and last year Google said that its recognition error rate for Google Voice voicemail transcription had dropped by 50 percent.

The signup link Google provided to try Voice Access now reports that the beta has “enough testers.” Look for Google to launch the app out of beta in the future.






Big Data – VentureBeat

ECM software's false tradeoff: User access vs. IT control


As the workforce gets more mobile and dispersed, enterprise software has to strike an increasingly difficult balance between accessibility and control.

While workers need software that is centralized and makes files easy to access, enterprises need to be able to lock down that information, protect it and prevent it from being leaked, hacked or left on a tablet in an airport. That’s a tall order for software: to make files easily accessible without lots of logins and firewalls, while at the same time ensuring that enterprise data is secure and protected.

Add to this management challenge the proliferation of enterprise content management (ECM) options during the past several years. Enterprises have a dizzying array of ECM software tools to choose from, a choice complicated by the diverse needs of different parts of any given organization. This raises the question: can today’s enterprise get full value from ECM technologies with a largely cloud-based, mobile-enabled workforce?

ECM software: Disruptive force or here to stay?

It’s no secret that numerous services allow organizations to address various ECM capabilities. In particular, simple file sharing through services like Box, Google and Dropbox has changed the way many companies exchange information and collaborate. Easy-to-use, Web-based and mobile-enabled file-sharing services give employees across geographies and computing environments — mobile, PC or Web — a simple way to share content, greatly improving its accessibility regardless of device, location or company affinity. But, among other issues, these services have also fragmented the document lifecycle management process: in making it simple to drop files in a highly accessible location, they often orphan content from governance, access security and automated management policies.


Legacy ECM software severely restricted access to content, largely based on access control lists or even simple licensing restrictions. By contrast, cloud-based ECM tools, with few ties to corporate identity software, created the opportunity to expose content to virtually anyone in the world. Account owners could dynamically create a repository, add content and enable individual access with little trouble — even incorporating federated identity solutions such as Yahoo IDs, OpenID, Facebook accounts and Microsoft (Live) accounts.

What should be clear to most technology and business leaders is that these cloud-based ECM technologies are here to stay. Organizational efforts to control their use have yielded little success. All further efforts should instead focus on incorporating these tools into the enterprise as part of the digital workplace. The first step should be identity management.

Creating a unified, controllable, secure and integrated identity management system should be paramount. One of the key factors in cloud ECM tool adoption is ease of use; users have turned to services like Box because the barriers to entry are low, with no virtual private network access or extensive login required. The top draw is the ability to easily gain access to these technologies, regardless of location and device. Organizations like Google, Microsoft and Amazon have begun to offer scalable services for creating unified identity management that ties external and internal identities, including the ability to use those identities for accessing commercial, cloud-based ECM services. This means organizations can create secure access without burdening users with arcane, difficult login processes.
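
One hedged sketch of that tie between external and internal identities follows; the issuer names and principals are placeholders, and a real deployment would verify signed tokens from the identity provider rather than trusting bare claims.

# Map a federated (issuer, subject) claim onto one internal principal.
FEDERATED_DIRECTORY = {
    ("accounts.google.com", "alice@example.com"): "corp\\alice",
    ("login.microsoftonline.com", "bob@example.com"): "corp\\bob",
}

def resolve_identity(issuer: str, subject: str) -> str | None:
    """Return the internal principal for an external login, if enrolled."""
    return FEDERATED_DIRECTORY.get((issuer, subject))

user = resolve_identity("accounts.google.com", "alice@example.com")
print(user or "access denied")  # corp\alice

The user signs in with an identity they already have; the mapping, not an extra password, is what brings the session under corporate control.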

Once a company establishes a manageable and universal identity management approach, it needs to create a findability solution. Organizations often use search tools to enable findability. While that’s a good start, multiple findability approaches need to be implemented together, with the content repository and content search intertwined. Findability also underpins other efforts, such as content personalization, content discovery and the effective application of metadata to content objects. Once findability technology and methods are in place, companies give employees an easy way to find content — a common complaint — and create mechanisms that allow regulated firms to meet regulatory and policy-driven compliance requirements, such as e-discovery and the Sarbanes-Oxley Act.
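
As a toy view of the indexing half of findability (real ECM platforms embed full search engines; the documents here are invented):

from collections import defaultdict

documents = {
    "doc-1": "merger agreement draft for acme",
    "doc-2": "acme patient consent form",
}

# Build a tiny inverted index: term -> set of document ids.
index: dict[str, set[str]] = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.split():
        index[term].add(doc_id)

def find(term: str) -> set[str]:
    return index.get(term.lower(), set())

print(find("acme"))  # {'doc-1', 'doc-2'}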

Finally, apply real and integrated content rights management to ECM tools. Security applied at the repository level, regardless of the repository, is neutered when the content leaves the store. Organizations that need to provide uninterrupted content governance and security must implement a rights management approach that can be attached to individual content items. Information rights management technologies prescribe what each recipient or content consumer can do with the content — from no access, to full edit rights, to view only without printing or saving capabilities.
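
A minimal sketch of per-item rights along the lines the article describes; the rights names and recipients are invented, and commercial information rights management products enforce this cryptographically, which plain application code cannot.

from enum import Enum

class Rights(Enum):
    NO_ACCESS = 0
    VIEW_ONLY = 1   # view, but no print or save
    FULL_EDIT = 2

# The policy travels with the content item, per recipient.
document_policy = {
    "partner@firm.example": Rights.FULL_EDIT,
    "client@example.com": Rights.VIEW_ONLY,
}

def may(recipient: str, action: str) -> bool:
    rights = document_policy.get(recipient, Rights.NO_ACCESS)
    if action == "view":
        return rights in (Rights.VIEW_ONLY, Rights.FULL_EDIT)
    if action in ("edit", "print", "save"):
        return rights is Rights.FULL_EDIT
    return False

print(may("client@example.com", "view"))   # True
print(may("client@example.com", "print"))  # False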

Extracting value from ECM software

Some organizations may not be able to take advantage of cloud-based ECM tools, but they are the minority. Most have opportunities that go unrealized, and there are ways to ensure the benefits of cloud and mobile enterprise content management are captured.

The first step is managing change. More often than not, migrating to newer technologies is simply a matter of change management — getting organizational standards to accommodate the change and encouraging the right sort of behavior from individual employees. Enterprises should focus on small but effective shifts in work to highlight advantages and minimize business disruption. An easy one is simply making repositories and content interaction truly mobile — create a compelling mobile experience.

Once you have a change plan in place, identify commodity workloads — e.g., file sharing — that can yield perceived value without creating undue risk. While many firms already use a variety of file-sharing tools, the goal is to rally around a single technology, with the corresponding change management mechanisms to produce a standard within the organization. Employees should think about file sharing as synonymous with this tool.     

Finally, address regulatory or policy challenges head on. It can often be easier to simply revert to “we’re regulated” or “legal won’t approve this” as excuses for not adopting ECM software. As mentioned earlier, there are real and necessary restrictions in a few circumstances. However, most cloud vendors and the associated ECM tools have addressed these regulatory hurdles. It’s time to put the benefits of cloud environments into action.



ECM, collaboration and search news and features