Tag Archives: Control

Ctrl-labs’ armband lets you control computer cursors with your mind

Controlling a mouse pointer with your mind may sound like science fiction, but Ctrl-labs, a startup based in New York City, is working hard to make it a reality.

I recently swung by the company’s new digs in Manhattan — a high-rise suite overlooking Herald Square, a few blocks south of the Theater District. It had been two weeks since Ctrl-labs’ employees moved into the Midtown office, lead scientist Adam Berenzweig told me, and the smell of fresh paint still hung in the air.

“We haven’t finished unpacking the furniture,” he said.

Ctrl-labs can afford the upgrade. In June, it raised $28 million in an investment round led by Lux Capital and GV (formerly Google Ventures), the venture capital arm of Alphabet (Google’s parent company). The two join a long, growing list of high-profile backers that includes the Amazon Alexa Fund, Paul Allen’s Vulcan Capital, Peter Thiel’s Founders Fund, Tim O’Reilly, Slack founder and CEO Stewart Butterfield, Warby Parker CEO Dave Gilboa, and others.

What convinced those tech luminaries to fund the three-year-old neuroscience and computing startup, I’d soon find out, feels a little bit like magic.

Finding the neural link

Thomas Reardon, the founder and CEO of Ctrl-labs (formerly Cognescent), was something of a child prodigy. He took graduate-level math and science courses at MIT while in high school and spearheaded a project at Microsoft that became Internet Explorer. A few years later, he enrolled in Columbia University’s classics program, where he studied neuroscience and behavior and went on to earn his Ph.D.

It was in 2015 at Columbia that Reardon, along with fellow neuroscientists Patrick Kaifosh and Tim Machado, conceived of Ctrl-labs and its lofty mission statement: “to answer the biggest questions in computing, neuroscience, and design.” After three years of research and development, the team produced its first product: an armband that reads signals passing from the brain to the hand.

The armband — a bound-together collection of small circuit boards, each soldered to gold contacts meant to adhere tightly to forearm skin — is very much in the prototype stages. A ribbon cable connects the contacts to a Raspberry Pi in an open plastic enclosure, which in turn connects wirelessly to a PC running Ctrl-labs’ software framework.

It’s deceptively unsophisticated.


Above: A view from Ctrl-labs’ new offices in New York City.

Image Credit: Kyle Wiggers / VentureBeat

Berenzweig thinks of the armband as an interface much like a keyboard or mouse. But unlike most peripherals, it uses differential electromyography (EMG) — an effect first observed in 1666 by Italian physician Francesco Redi — to translate mental intent into action.

How does it do that? By measuring changes of electrical potential, which are caused by impulses that travel from the brain to hand muscles through lower motor neurons. This information-rich pathway in the nervous system comprises two parts: upper motor neurons connected directly to the brain’s motor center, and lower axons that map to muscle and muscle fibers. Neurotransmitters run the length of that long neural pathway and turn individual muscle fibers on and off — the biological equivalent of binary ones and zeros.

The armband is quite sensitive to these impulses. Before Berenzweig kicked off a demo of the wristband, he made sure to put distance between it and a metal pushcart nearby.

“It acts like an antenna,” he said, “so it’s susceptible to interference.”

While the armband’s 16 electrodes monitor the electric fields generated by nerves in the wearer’s arm, Ctrl-labs’ software ingests the data, and with the help of a machine learning algorithm trained using Google’s TensorFlow, distinguishes between the individual pulses of each nerve.
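Ctrl-labs hasn’t published its signal-processing pipeline, and its TensorFlow model is far more sophisticated than anything shown here. As a drastically simplified, hypothetical sketch of the general idea, though, the first steps of EMG decoding often amount to windowing a raw trace and measuring per-window amplitude before a classifier sees it:

```python
import math

def rms_windows(samples, window=8):
    """Split a raw EMG trace into fixed-size windows and return the
    root-mean-square amplitude of each window, a common first feature
    before classification."""
    out = []
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        out.append(math.sqrt(sum(x * x for x in chunk) / window))
    return out

def active_windows(samples, window=8, threshold=0.5):
    """Flag windows whose RMS amplitude exceeds a threshold, i.e. where
    muscle activity is plausibly present. A real system would use a
    learned model rather than a fixed threshold."""
    return [r > threshold for r in rms_windows(samples, window)]

# A quiet stretch followed by a burst of activity on one channel.
trace = [0.01, -0.02, 0.03, -0.01, 0.02, -0.03, 0.01, -0.02,
         0.9, -1.1, 1.0, -0.8, 1.2, -0.9, 1.1, -1.0]
print(active_windows(trace))  # → [False, True]
```

The function names, window size, and threshold are all illustrative assumptions; the point is only that the software turns continuous electrical measurements into discrete activity decisions.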

Berenzweig, who had put on an armband before I arrived, showed me on a PC an EKG-like graph of colored lines representing each contact. As he lifted a digit, one of the lines tremored slightly. Then he let his hand rest at his side, motionless. It tremored again.


Above: Ctrl-labs’ prototype armband.

Image Credit: Kyle Wiggers / VentureBeat

The wondrous thing about EMG, Berenzweig explained, is that it works independently of muscle movement; generating a brain activity pattern that Ctrl-labs’ tech can detect requires no more than the firing of a neuron down an axon, or what neuroscientists call an action potential.

That puts it a class above wearables using electroencephalography (EEG), a technique that measures electrical activity in the brain through contacts pressed against the scalp. EMG devices draw from the cleaner, clearer signals from motor neurons, and as a result are limited only by the accuracy of the software’s machine learning model and the snugness of the contacts against the skin.

That’s not to suggest they’re perfect. Waterloo, Ontario-based startup Thalmic Labs began shipping an EMG armband in 2013 — the Myo — that can detect muscle movements, recognize gestures and joint motion, and map neural signals to keys on a keyboard and video game hotkeys. But many of the less-than-stellar reviews mention the inconsistency of its gesture recognition.

Ctrl-labs prototyped its machine learning algorithms with Myo before developing its own hardware, and Berenzweig owns one personally. But the current iteration of Ctrl-labs’ armband is far more precise than the Myo, and can work anywhere on the forearm or upper arm. Future versions will work on the wrist.

He explained this to me as he typed a few commands into a Linux terminal and fired up the first demo. A likeness of a human hand appeared onscreen and Berenzweig manipulated it with his fingers, their movement mirroring that of his digital doppelganger.

Then he strapped the bracelet on my arm. I had worse luck — the thumb on the computerized hand reflected the motions of my thumb, but the index and pinkie finger didn’t — they remained stiff. Berenzweig had me recalibrate the system by angling my wrist slightly, but to no avail.

He chalked it up to the demo’s generalized machine learning model. Experimental versions of the software, he said, are performing much better.

In a second demo, I watched as Berenzweig moved a computer cursor toward a target. Unlike the first, this demo’s movements actively train a neural net, tuning the system to each user’s neural idiosyncrasies.

When it came time again for my turn, I wasn’t exactly sure how to control it. But after a trepidatious start in which the cursor made maddening laps around the target, coming close to it but not quite touching it, the algorithm — and by extension, precision — improved drastically. Within just a few seconds, moving the cursor with thought became almost second nature, and I was able to steer it up, down, left, and right by thinking about moving — but not actually moving — my hand.
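The article doesn’t say what learning rule Ctrl-labs uses for this per-user adaptation. As a hedged, minimal stand-in, a least-mean-squares (LMS) update captures the flavor of it: each observed target nudges a linear decoder’s weights so its predicted cursor motion gets closer to what the user intended.

```python
def lms_step(weights, features, target, lr=0.1):
    """One least-mean-squares update: nudge decoder weights so the
    predicted cursor velocity moves toward the observed target."""
    pred = sum(w * f for w, f in zip(weights, features))
    err = target - pred
    return [w + lr * err * f for w, f in zip(weights, features)], abs(err)

weights = [0.0, 0.0]
# Pretend the "true" decoder is velocity = 1.0*f0 + 0.5*f1 (made up here).
data = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.5), ([1.0, 1.0], 1.5)] * 50
errors = []
for feats, target in data:
    weights, e = lms_step(weights, feats, target)
    errors.append(e)
print(errors[0] > errors[-1])  # → True: error shrinks as the decoder adapts
```

The feature vectors and targets are fabricated for illustration; the real system presumably adapts a far larger neural network, but the shrinking-error dynamic is the same experience described above.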

Berenzweig believes this kind of algorithmic learning, which is crucial to the system’s accuracy, could be gamified in other ways. “We’re trying to find the right way to approach it,” he said.

An eye on VR — and smartphones

Ctrl-labs’ armband won’t be relegated to the lab for much longer. By the end of this year, the company plans to ship a developer kit in small quantities and make available software that will expose the band’s raw signals. The final design is in flux, and at least a few will be manufactured in-house.

Pricing hasn’t been decided, though Berenzweig said it will be higher than the eventual commercial model’s price point.

Around the corner from the demo and adjacent to a room with a MakerBot (which the team uses to quickly prototype shells), Berenzweig showed me a poster board of concepts and potential form factors. Some looked not unlike Android Wear smartwatches — while the developer kit will have to be tethered to a PC for some processing, he said, the processing overhead is such that all of the hardware will eventually be self-contained.

As for what Ctrl-labs expects its early adopters to build with it and for it, video games top the list — particularly virtual reality games, which Berenzweig thinks are a natural fit for the sort of immersive experiences EMG can deliver. (Imagine swiping through an inventory screen with a hand gesture, or piloting a fighter jet just by thinking about the direction you want to fly.)

But Ctrl-labs is also thinking smaller. Not too long ago, it demonstrated to Wired a virtual keyboard that maps finger movements to PC inputs, allowing a wearer to type messages by tapping on a tabletop, and at the 2018 O’Reilly AI conference in New York City, Reardon spoke about text messaging apps for smartphones and smartwatches that let you peck out replies one-handed. Berenzweig, for his part, has experimented with control schemes for tabletop robotic arms.

“You know how early versions of Windows used to ship with Minesweeper, and Windows sort of became known for it? We need to find our Minesweeper,” he said.


Above: A few Ctrl-labs armband engineering samples.

Image Credit: Ctrl-labs

One field of research Ctrl-labs won’t be investigating is healthcare — at least not at first. While Berenzweig agrees that the tech could be used to help stroke victims and people with degenerative neural diseases like amyotrophic lateral sclerosis (ALS), he says those aren’t applications the company is actively exploring. Ctrl-labs is loath to submit its hardware for approval by the Food and Drug Administration, a potentially years-long process. (Reardon’s stated goal is to get a million people using the armband within the next three to four years.)

“We’re focusing on consumers right now,” Berenzweig said. “We think it has medical use cases, but we want it to be a consumer product.”

By the time Ctrl-labs hits retail store shelves, it’ll likely have competition. Thalmic Labs is developing a second-generation EMG armband, and Neuralink, a venture founded by SpaceX and Tesla head Elon Musk, aims to develop mass-market implants that treat mood disorders and help physically disabled people regain mobility.

Not to be outdone, Facebook is researching a kind of telepathic transcription that taps the brain’s speech center. At the MIT Media Lab conference in September 2017, project lead Mark Chevillet told the audience that the team plans to detect brain signals using noninvasive sensors and diffuse optical tomography. Effectively, this would allow a user to type words simply by thinking them.

Berenzweig is convinced that Ctrl-labs’ early momentum, plus the robustness of its developer tools, will help it gain an early lead in the brain-machine interface race.

“Speech evolved specifically to carry information from one brain to another. This motor neuron signal evolved specifically to carry information from the brain to the hand to be able to effect change in the world, but unlike speech, we have not really had access to that signal until this,” he told Wired in September 2017. “It’s as if there were no microphones and we didn’t have any ability to record and look at sound.”


Big Data – VentureBeat

Collaboratively Innovate with Version Control in Spotfire Analytics

Version control (also known as revision control or source control) for DevOps and software development is a well-adopted practice to eliminate the risk of manual errors and enable teams to work together to develop code. However, many BI analysts and developers aren’t taking advantage of the benefits of version control when it comes to developing analytics reports. Typically, analytics reports are made up of a group of individually-created versions, which are saved and merged manually without source control as a backup. So, how should BI analysts and developers adopt version control and what challenges could this best practice address?

Learning from Software Development Best Practices

One of the most important best practices for software development is “commit early, often, and with clear messages to your future self or others.” In other words, take baby steps for success and commit even small chunks of valid code to provide easy-to-follow history tracking with a detailed changelog. This comprehensive history serves as the basis for deciding on the final version of the code. It’s also important, in the case of an emergency, for discovering what went wrong compared to previous versions.

Here are several other best practices for maintaining version control when developing code:

Write Descriptive Commit Messages

In the moment, it’s easy to generate a creative abbreviation. However, these abbreviations need to have meaning now and in the future to both yourself and others. Writing descriptive commit messages helps provide enough information that anyone can understand.

Keep File Naming Conventions Consistent

An easy-to-recognize file structure encourages teamwork and establishes a single point of understanding between team members. Elements should be recognizable at a glance to accelerate collaboration and eliminate time that could be wasted trying to identify files in the repository.

Merge Versions Seamlessly

Merging can be useful when trying to combine the work of several team members with previously separated codes and allows for accelerated delivery cycles.

Version Control for BI Analysts & Developers

BI analysts and developers face these common challenges when building analytical reports:

– Coordinating development, changing enterprise analytics workflows and managing multiple report versions, especially with several contributors, is risky due to the possibility of human error.

– Development cycles and production releases without appropriate governance can lead to mistakes in the final release.

– A lack of standardized development processes can cause conflicts, bottlenecks to collaboration and wasted time due to manual verification.

With these challenges in mind, let’s take a look at how leveraging the best practices of software development can improve your BI and analytics practice. Imagine a team that consists of three business analysts is just about to kick off an Agile project where quick and accurate deliverables are key.


Above: Peter from Germany, Maria from Spain, and Paul from the UK.

Although Peter, Maria, and Paul are all very talented Spotfire experts in data analytics and setting KPI targets, they all have different working habits and work in different parts of the world across various time zones. Under these circumstances, it is even more important to align their versions transparently and consistently by integrating version control best practices into their daily routine, including:

– Governing production releases

– Restoring mission-critical deployments in production

– Tracking and coordinating changes by Report Template Authors

– Merging work efficiently

By adopting these best practices, BI analysts and developers can achieve several business benefits:

Accelerated Quality Report Delivery through Better Collaboration

With history tracking, changelog generation, and detailed report comparison, multiple BI developers can work on the same report at the same time. With this type of collaboration, customers can receive a quality report at an accelerated rate.

Safeguarded Brainstorming & Elimination of Human Mistakes

With the ability to restore previous versions, users can brainstorm and try ideas safely while building dashboards. BI and analytics developers can eliminate costly human errors by adopting version control.

Enhanced Teamwork across Different Locations and Time Zones

Merging independently developed versions of an analytical report can be risky. With version control, all changes are tracked, and users can share and edit the same document at the same time. Ultimately, version control takes collaboration to the next level and increases effective work across different locations and time zones.

These are just some of the benefits that BI analysts and developers can take advantage of by adopting a software development best practice, such as version control. To learn more about this topic and EPAM’s BI Version Control Accelerator for TIBCO Spotfire, a turn-key solution that accelerates the delivery of Spotfire reports by 4-5x, join our webinar series or contact us at enterpriseanalytics@epam.com to request a free demo!


The TIBCO Blog

Version Control: Don’t let your versioning get out of control


Losing control of changes in a file is the stuff of nightmares. You know the deal: You craftily create copy, send it around for review amongst your peers, and then everyone starts sending back separate files with their comments. If you accept their feedback, you start implementing the changes in one of the documents – your ‘master’ – as you continue to bring in additional comments and edits from other people. But if the feedback loop is long and you’re at all distracted, it can be terrifyingly easy to lose track of which document is the master.

I’m sure you’ve lived the nightmare of overwriting important changes or renaming a file incorrectly, only to realize the mistake many revision rounds later. I have, and it’s crazy-making. Please see my other feedback/edits, says a colleague. I swore I made those changes, I think.

Worse yet: The thing is printed or published before you catch it; you only realize later that it was a non-final version. Horror.

Version control – or lack thereof – can take down any well-meaning marketing organization. Losing track of what round you’re on, which is the working version, and where changes went, is maddening, frustrating, and inefficient.

So how to wrest control of this beast? Let’s talk about how to keep track of your versions … where to store them, how to name files, how to follow changes, etc.

Establish file-naming conventions

One simple but tremendously effective first step is to establish file-naming conventions across your organization.

If you take away nothing else from this article, heed this: Establish a simple but consistent naming system for your files and stick to it. This may include a combination of numerals or initials or dates. And you must use it militantly, with each and every revision.

Personally, I use different methods depending on how many people are involved in a project. If it’s just me working on a file, like the draft of this blog, I keep a simple file-naming strategy. I label everything with version number only: “v1,” “v2,” etc.

But once more people get involved, I get more granular. When multiple people are reviewing, I find it handy to append a file name with initials so I can easily see whose review I’m working with or scan my desktop to see who’s already given input. I also like to add a date to the file name, such as FILENAME_060717. (True, most word processing programs provide a revision history in the file details – but that doesn’t stick if you later re-open a file and make a tiny change. The clock resets.) And when working with extensive, fast-paced edits – inputting multiple changes in a day – I like to timestamp the actual file name with “morning revision,” “afternoon revision,” or even “2pm revision,” etc., to keep it all straight.

Set up a clear creative workflow

Another step to gaining control of the beast is to set a strategy of who reviews what, and at what stage. Having some kind of plan that is documented – fancy or intricate – can help you keep control of files. These are the hallmarks of a creative workflow, which is another process I advocate. In fact, you may be able to marry the two into one process, an efficient fell swoop. But again, like the naming convention, you really need to stick to the plan and enforce it.

Get everyone on the same page

So you have a file naming system and a plan. That’s great – for you. But if you need to loop others into that plan, it’s critical to get everyone working on the same proverbial page. This means implementing rules and process, not just for yourself, but for the whole organization.

Establish an efficient cross-functional workflow and get someone to wear the Type A hat. This person can document the process, send instructions to colleagues about how to name and label their files, hold mini training sessions, and spot check to ensure and enforce that the process is adhered to. Doing the stickler-for-details routine is not a lot of fun, but it will save time in the long run.

A “Word” on track changes

There’s nothing that sparks ire in me like receiving a clean file from a contributor. I think, “Hey, wow – they had no changes” … only to start reading through and realize, with horror, that they have made legions of edits without notating them. It’s when I’m in the weeds, reading line by line to compare the old and the new to catch changes, that smoke comes out of my ears.

Change-tracking tools, like the one in Word or Google Docs, provide an easy way to annotate, edit, and mark up. This helps me quickly see who made what change and when.

I don’t just track changes when working with others. I turn the function on while doing my own work to keep tabs on what I’m doing. I love that the program keeps a record of my changes and easily enables me to stet and revert if my edits aren’t so stellar after all.

A cousin of track changes is the comments tool, which I equally love. I use comments to remind myself of status, things I need to come back and complete, or thought process. I also use comments to explain to others why I’m making certain wording changes, why I rewrote a passage for tone, and so on.

I know what you’ve heard in the past about track changes – and it’s true. It’s a visually overwhelming beast to wrangle. But the programs have evolved in a way that makes this manageable. Now it’s possible to mark changes but hide them as you go, so your page stays clean and easy to read and you can focus on just the flow and look of the copy.

Caution: Don’t stamp it “final” until it really is

There’s something magical – and not in a good way – that seems to happen when I append a file name with the word “final.” It’s like a cue to Murphy’s Law: The moment I proclaim a file to be finished, inevitably a host of edits and further changes trickle in.

To trick myself (and Murphy), I’ve learned to not use the word “final” on my files until the thing has shipped, launched, and left the proverbial building. Instead, I keep on keeping on with the naming conventions above so I know which version is the latest. Only later, once all is done, do I come back and make a copy of the One That Went To Press and rename it “Final.”

How to get to “final” stage? Provide a window for internal client review. And then, when it’s closed, it’s closed. Unless it’s a legal issue, ship when you say and stick to it. Your clients will fall in line.

Create an archive

Archiving – on your desktop or a cloud drive – is another great way to keep track of versions.

As I work on multiple files, I stash them in a folder on my desktop called “Previous Versions.” This way, when I open my main working folder, I only see the absolute latest. It helps keep me from being overwhelmed or from scanning through and grabbing the wrong file. It also makes for a nice archive. When I work this way, I create a folder that captures all of the edits and revisions along the way. So, if I ever need to revert to a previous iteration, it’s easy to find.

When all is said and done, I may have three or 300 versions of revised files. Much like tax records, I like to hold on to these for a spell … at least until my files are out the door. This way, I have a clear archive of all changes that have been made throughout the revision process – something to look back to should I need to revert to a previous iteration, see where a strange change or unfamiliar edit was introduced, etc.

I also love working with cloud drives – like Google Drive or SharePoint or Dropbox. In fact, I almost exclusively work on files out of these systems these days. That way I know I’m always working on the latest version of a file, and I can do my work from any computer, versus having to email myself copies of the files. Multiple people can contribute, edit, or revise simultaneously. And it keeps an automatic archive.

Parting thoughts: Be alert when making your edits

One last word of advice: As much as possible, try to limit your distractions when working on files and changes. Close the door, drink some coffee, and turn the music off (or on, if that helps you focus). There’s nothing worse than making a host of excellent changes, only to realize later that you made them to the wrong version of a file and need to redo your work. Or, you’ve saved the changes in a strange place and the file has now gone missing. So pay attention.

And that’s final.

What is your favorite way to keep version control? Or what file-related horror story do you have to share?


Act-On Blog

Shewhart Control Charts and Trend Charts with Limits Lines in TIBCO Spotfire

Shewhart control charts are popular charts commonly used in statistical quality control for monitoring data from a business or industrial process. The goal of a statistical quality control program is to monitor, control, and reduce process variability. These charts often have three lines—a central line along with upper and lower control limits that are statistically derived. They enable the user to monitor a process for shifts, relative to a baseline historical period, that alter the location or variability of the measured statistic. There are a number of different types of charts, each with their own formula for calculating control limits and methods of applying rules to determine whether the process is in or out of control.

One common set of control charts consists of a pair of charts:

1. The individual chart, which displays the individual measured values

2. The moving range chart, which monitors the process variability
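This chart pair can be sketched numerically. The following is a minimal illustration using the standard individuals and moving-range (I-MR) constants 2.66 (that is, 3/d2 for subgroups of size 2) and 3.267; the sample data is invented:

```python
def imr_limits(values):
    """Compute center lines and control limits for an individuals &
    moving-range (I-MR) chart pair.  Moving ranges are the absolute
    differences between consecutive measurements."""
    mr = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(mr) / len(mr)       # average moving range
    x_bar = sum(values) / len(values)  # process mean
    return {
        # (LCL, center, UCL) for the individuals chart
        "individuals": (x_bar - 2.66 * mr_bar, x_bar, x_bar + 2.66 * mr_bar),
        # (LCL, center, UCL) for the moving-range chart
        "moving_range": (0.0, mr_bar, 3.267 * mr_bar),
    }

data = [10.2, 9.9, 10.1, 10.4, 9.8, 10.0, 10.3, 9.7]
limits = imr_limits(data)
print(round(limits["individuals"][1], 2))  # → 10.05 (the center line)
```

A point falling outside these limits on either chart is the signal the monitoring rules look for.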

Uses of control charts

—Monitor a process for special causes of variation that can occur. For example, a flood alarm that monitors water level.

—Control the location and variability of a process metric and not allow more process variation to occur than was present when the control limits were set. Often, a process capability study is performed prior to setting control limits, to ensure that the process is capable of performing within the specification limits. Specification limits define the region within which the metric must remain for proper functioning of the process or product.

—Drive continuous process improvement. Control charts identify out of control points, whose causes are identified and eliminated. Limits are then recalculated and tightened and the process is repeated.

Popular types of control charts

Run Chart
x̅ and S Chart
x̅ and R Chart
Individual and Moving Range Charts
p-, np-, c-, and u-charts
UWMA and EWMA Charts
CUSUM Charts
Multivariate Control Charts

How to create control and trend charts with limits lines using Spotfire

Creating lines with lines and curves property

1. Control limits or specification limits may have predetermined values which can be set using the fixed value line option.

2. Predefined aggregated values can be used for creating lines such as the upper outer fence. The upper outer fence (UOF) is defined as the threshold located at Q3 + (3*IQR), where Q3 is the third quartile and IQR is the interquartile range.

3. Property values can be used to specify dynamic control lines that can be changed by a user, a script running in the background, or a data function. Property updates can be triggered through a user-friendly interface, such as controls for selecting the sigma level and metric.

4. Custom expressions, which can be easily modified, help create specific calculations for a control line. They can be as simple as Avg([Y]) + 3.0*StdDev([Y]). It can also be combined with properties.
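The two calculations mentioned above, the upper outer fence and the Avg([Y]) ± 3.0*StdDev([Y]) expression, are easy to check outside Spotfire. Here is a minimal Python sketch; it assumes Spotfire's StdDev is the sample standard deviation, and note that different quartile definitions shift the fence slightly:

```python
import statistics

def upper_outer_fence(values):
    """Q3 + 3*IQR, the upper outer fence described above, using
    Python's 'inclusive' quartile method."""
    q1, _, q3 = statistics.quantiles(values, n=4, method="inclusive")
    return q3 + 3 * (q3 - q1)

def three_sigma_lines(y):
    """Rough equivalent of the custom expressions
    Avg([Y]) - 3.0*StdDev([Y]) and Avg([Y]) + 3.0*StdDev([Y])."""
    mean = statistics.fmean(y)
    sd = statistics.stdev(y)  # sample standard deviation
    return mean - 3 * sd, mean, mean + 3 * sd

print(upper_outer_fence([1, 2, 3, 4, 5, 6, 7, 8, 9]))  # Q1=3, Q3=7 → 7 + 3*4 = 19.0
lcl, center, ucl = three_sigma_lines([10, 12, 11, 13, 9, 10, 12, 11])
print(round(center, 2))  # → 11.0
```

Validating the arithmetic this way before wiring it into a lines-and-curves expression makes it much easier to trust what the chart is drawing.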

Calculation-based lines

Sometimes a control line can be a complex equation: Y(Control Line) = C2 + (D/p) * cos[(p/D) * X + C1]

In this case, C1 and C2 are constants, which can be properties in Spotfire, and D (drag, in this equation) can be a column. Spotfire math functions can be used to compute the cosine of the argument, and the weight per length of line, p, can be another calculated column. In Spotfire, the expression may look like this, where the $ symbol indicates a property:

${RunYieldsTarget} + ([Metric5]/[Metric6])*Cos([Metric5]/[Metric6]*[Metric1] + ${Rpk.calculated})

Moving range chart

To create moving ranges, Spotfire’s LastPeriods OVER function is very useful. It includes the current node and the n − 1 previous nodes, which can be used to calculate moving averages.

Avg([Metric5]) OVER (LastPeriods(n, [Axis.X]))

This expression calculates an n-period average, where n is an integer. With the X-axis defined as month and n = 3, it yields a three-month rolling average.
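The same rolling calculation can be sketched in plain Python. As with the OVER function, the window covers the current value plus the preceding ones, and for the first few points fewer than n values exist, so the window simply shrinks:

```python
def rolling_average(values, n):
    """Average of the current value and up to n-1 preceding ones,
    mirroring a LastPeriods-style moving average."""
    out = []
    for i in range(len(values)):
        window = values[max(0, i - n + 1):i + 1]
        out.append(sum(window) / len(window))
    return out

monthly = [100, 110, 120, 130, 140]
print(rolling_average(monthly, 3))  # → [100.0, 105.0, 110.0, 120.0, 130.0]
```

How Spotfire treats the first, partial windows depends on the data and axis configuration; the shrinking-window behavior here is one reasonable convention.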

Control lines from another batch or process

Sometimes control lines can be from another golden batch or process.

Curve from another data table allows users to specify a custom curve expression, which makes use of parameters available in a specified data table or golden batch.

Line from column value can display lines based on X and Y coordinates that already exist in two columns of your analysis. For example, coordinate values could be calculated from the input data using a statistical calculation from a calculated column or even a data function, and the output result could be presented as coordinate values for a curve.

All these building blocks can be combined, nested, and morphed into a beautiful dashboard.

Process Control1 Shewhart Control Charts and Trend Charts with Limits Lines in TIBCO Spotfire

Try out Spotfire for yourself and see how easy it is to create insightful and beautiful dashboards from your data. Check out other Tips and Tricks blog posts to learn more.


The TIBCO Blog

Police Riot Control Vehicle


Mobile little fortress.

Antifa’s worst nightmare

March 21, 2017

Core Database Source Control Concepts

This article takes a conceptual approach to explaining core source control concepts, using diagrams and anecdotes as necessary to describe how all the pieces fit together.

  • Why source control – the basic purpose of source control and the benefits it offers, including:

    • maintaining an “audit trail” of what changed, who made the change, and when.
    • allowing team collaboration during development projects.
  • Basic source control components and concepts, including:
    • the repository, which stores the files and maintains their histories.
    • the working folder, which provides an isolated environment (sandbox) for creating, editing, and deleting files without affecting those in the repository.
    • workflow concepts, such as versioning, branching and merging files.

Why Source Control?

A source control solution provides users with the tools they need to manage, share, and preserve their electronic files. It does so in a manner that helps minimize the potential for conflicting changes and data loss — in other words, one user inadvertently overwriting another user’s changes when multiple users work on the same files. If one user changes the name of a column while another one updates its data type, the source control system will alert us to the conflict and allow the team to decide the correct outcome, via a merge operation.

Critically, a source control solution maintains a version history of the changes made to every file, over time, and provides a means for users to explore those changes and compare different file versions. This is why we often refer to a source control system as a version control system, or VCS for short.

These days, few software developers would consider building an application without the benefits that a VCS offers, but adoption for databases has been slower. This is mainly because of the nature of a database, which must preserve its “state” (i.e. business data) between database versions. Having the files that define the schema objects and other code in source control is therefore not the whole story: when upgrading a “live” database to the newer version that exists in the VCS, we can’t simply tear down objects that store data and re-create them each time.

Nevertheless, despite this added complication, there is no reason why we should exclude databases from our source control practices. In fact, a VCS can be one of a database developer’s most valuable tools and the foundation stone for an effective and comprehensive change management strategy.

At its heart, the purpose of a VCS is to maintain a change history of our files. As soon as we enter a new file into source control, the system assigns it a version. Each time we commit a change to that file, the version increments, and we have access to the current version and all previous versions of the file. This versioning mechanism serves two core purposes:

  • Change auditing

    • Compare versions, find out exactly what changed between one version and another.
    • Find out who made the change and when; for example, find out when someone introduced a bug, and fix it quickly.
  • Team collaboration
    • Inspect the source control repository to find out what other team members have recently changed.
    • Share recent changes.
    • Coordinate work to minimize the potential for conflicting changes being made to the same file.
    • Resolve such conflicts when they occur (a process called merging).

By maintaining every version of a file, we can access the file as it existed at any revision in the repository, or we can roll back to a previous version. Source control systems also allow us to copy a particular file version (or set of files) and use and amend them for a different purpose, without affecting the original files. This is referred to as creating a branch, or fork.

I hope this gives you a sense of the benefits a source control system offers. We’ll look at more as we progress. The following sections paint a broad picture of the source control components and workflow that enable this functionality. We review the most important concepts in terms of the content creation, the storage, and the tracking strategies they enable, but we won’t go into too much detail.

The Source Control Repository

At the heart of a VCS is the repository, which stores and manages the files, and preserves file change histories.

Centralized source control systems support a single, central repository that sits on a server, and all approved users access it over a network. In distributed source control systems, each user has a private, local repository, as well as (optionally) a “master” repository, accessed by all users. We assume a centralized model for the conceptual examples in this article.

Regardless of the model used, when we add files to the repository, those files become part of a system that tracks who has worked on the file, what changes they made, and any other metadata necessary to identify and manage the file.

Repository storage mechanisms

The exact storage mechanism varies by source control system. Some products store both the repository’s content and its metadata in a database; some store all content and metadata in files (with the metadata often stored in hidden files); other products take a hybrid approach and store the metadata in a database, and the content within files.

The repository organizes files and the metadata associated with each file, in a way that mirrors the operating system’s folder hierarchies. In essence, the structure of files and folders in a source control system is the same as in a typical file management system such as Windows Explorer or Mac OS Finder. In fact, some source control systems leverage the local file management system in order to present the data in the repository. Figure 1 shows the server repository structure for a BookstoreProject repository, with the Databases folder expanded to reveal a Bookstore database. Don’t worry about the details of this structure yet, as we’ll get to them later.

Figure 1: Typical hierarchical folder structure in the repository.

What sets a source control repository apart from other file storage systems is its ability to maintain file histories. Everything we save to a repository is there forever, at least in theory. From the point that we first add files to the repository, the system maintains every version of every file, recording every change to those files, as well as to the folders that form the repository structure.

Source control of non-text files

Ideally, a VCS manages and tracks the changes made by all contributors to every type of file in the system, whether that file is a Word document, Excel spreadsheet, C# source code, or database script file. In reality, however, traditional source control solutions usually track changes only on text files, such as those used in application and database development, and tend to treat binary files, such as Word or Photoshop files, as second-class citizens. Even so, most solutions maintain the integrity of all files and help manage processes such as access control and file backup.

The Working Folder

Most users care less about how their source control system stores the file content and metadata, and more about being able to access and work on those files. Each repository includes a mechanism for maintaining the integrity of the files within their assigned folder structure and for making those files accessible to authorized users.

However, to be able to edit those files, each user needs a “private workspace,” a place on his or her local system, separate from the repository, to add, modify, or delete files, without affecting the integrity of the files preserved in the repository.

Most VCSs implement this private workspace through the working folder. The working folder is simply a folder, and set of subfolders, on the client computer’s file system, registered to the source control repository and structured identically to the folders in the repository. Figure 2 shows the working folder structure for a user of the BookstoreProject repository. This user has copied the entire repository to a working folder called BookstoreProject.

Figure 2: Typical working folder structure.

Each user stores in their working folder copies of some or all of the files in the repository, along with the metadata necessary for the files to participate in the source control system. As noted above, that metadata is often stored in hidden files.

We can update our working folder with the latest version of the files stored in the repository, as illustrated in Figure 3.

Figure 3: Copying files from the repository to the working folder.

We can edit the files in the working folder as necessary and then, eventually, commit the edited versions back to the repository. This process of “synchronizing” the working folder with the repository, i.e. updating the local working folder with any changes in the repository and committing any local changes to the repository, works differently from product to product and depends on whether the repository is centralized or distributed.

Regardless of repository type or product, the source control system always keeps the files in the repository separate from changes made to files in the working folder, until the user chooses to commit those changes to the repository.

Versioning and Collaborative Editing

The exact architecture and mechanisms that underpin versioning and collaborative editing vary by VCS, but the basic principles are constant. A user can obtain a local “working copy” of any file in the repository, make additions and amendments to that file, and then commit those changes back to the main repository. At that point, other users can request from the repository the amended version of the file or any of the previous versions. The VCS maintains a full change history for each file, so we can work out exactly what changed from one version to another.

In this section we’ll discuss, at a high level, how the repository maintains these file versions, as users make progressive changes to those files. Notionally, this versioning process is easy in a single-user system. A user updates his or her working folder with the latest versions of a set of files, and then edits those files as appropriate in a suitable client, such as Notepad for a plain text file, Visual Studio for an application file, or SQL Server Management Studio for a database file. The user then commits the changes to the repository, creating new versions of the files.

However, another key function of source control is to enable a group of people to work collaboratively on the set of files that comprise a development project. In other words, the VCS must allow multiple users to modify a file concurrently, while minimizing the potential for conflicts and data loss. Let’s see how source control systems allow for and manage these concurrent changes.

How versioning works

Let’s assume we’ve created a project directory in source control for an Animal-Vegetable-Mineral (AVM) application and that we’ve established a working folder for this project.

Figure 4 depicts the progressive changes to the application over three revisions. Notice that the repository preserves all the changes to the files, with each version assigned a revision number. Note that Figure 4 is not in any way a depiction of how a VCS maintains different file versions internally. It is merely to help visualize the process of how it can allow us to access different file versions, and provide a history of changes to our files over time.

Figure 4: Working with files from the AVM project in source control.

In Revision 1, we committed to the repository (from our working folder) two new files, Animals.txt and Vegetables.txt. Revision 1 represents the first and latest file versions in the AVM repository.

We edited Animals.txt to replace skunk with elk, and created a third file called Minerals.txt, and committed the changes to the repository. Collectively, these changes form Revision 2. Vegetables.txt remains unchanged from Revision 1.

Next, we edited Vegetables.txt, changing sprouts to carrots, and edited Minerals.txt, changing potash to pyrite and adding silica. These changes form Revision 3, with Animals.txt unaltered from Revision 2.

Mostly, users are interested in working with the latest folder and file versions in the repository, but we can also request to see the repository as it existed at any earlier revision, with each file exactly as it existed at that point in time. For example, if we were to pull Revision 2, we would get the Revision 2 copies of the Animals.txt and Minerals.txt files, as well as the Revision 1 version of the Vegetables.txt file. In this way, we can build and deploy a specific “version” of the application or database.
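To make this concrete, here is a toy sketch in Python of a repository that records, for each commit, only the files that changed, and reconstructs the full file tree for any revision. This is purely illustrative of the concept above, not how any real VCS stores its data:

```python
class Repository:
    def __init__(self):
        self.revisions = []  # one {filename: content} dict per commit

    def commit(self, changes):
        """Record a commit; returns the new revision number (1-based)."""
        self.revisions.append(dict(changes))
        return len(self.revisions)

    def checkout(self, rev):
        """Reconstruct the full file tree as it existed at `rev`."""
        tree = {}
        for snapshot in self.revisions[:rev]:
            tree.update(snapshot)  # later commits overwrite earlier versions
        return tree

repo = Repository()
repo.commit({"Animals.txt": "cat\nbear\nskunk", "Vegetables.txt": "sprouts"})  # Revision 1
repo.commit({"Animals.txt": "cat\nbear\nelk", "Minerals.txt": "potash"})       # Revision 2
repo.commit({"Vegetables.txt": "carrots", "Minerals.txt": "pyrite\nsilica"})   # Revision 3

# Pulling Revision 2 yields the Revision 2 files plus the Revision 1
# copy of Vegetables.txt, exactly as described above.
print(repo.checkout(2)["Vegetables.txt"])  # sprouts
```

Pulling the latest revision is just `checkout(len(repo.revisions))`, which is what an ordinary working-folder update does.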

Conceptual versus actual source control implementation

In this article, the descriptions of the versioning process are conceptual in nature. An actual source control implementation will vary according to product. It might not store this many different file versions, or it might build a given version by storing a record of the differences (the delta) between a current version and the previous version.

Since developers usually want to ensure they are working with all the most recent versions of their project files, they update their working folders regularly to get the latest versions. When they view their working folders, they are viewing the latest version of the repository and all its folders and files, at the point in time they did their last update.

However, we can also request to view the revision history for the repository (for example, by accessing the repository’s log). Exactly what we see when viewing the revision history depends on the VCS, but it’s likely to include information such as the revision number, the action, who made the changes, and when, and the author’s comment, i.e. a description of the change. Figure 5 shows what the log might look like for the AVM project folder, after the sequence of changes (and assuming two users, Fred and Barb).

Figure 5: Storing files in the repository of a source control system.

A VCS, as noted earlier, often stores the differences between each file version, rather than the full file for every version. We refer to each set of differences as the delta. If we request to view a file as it existed at a particular revision, the VCS might, for example, retrieve the last stored complete file and then apply the deltas in the correct order going forward.

Likewise, a VCS will usually provide an easy visual way for users to see a list of what changed between any two revisions in the repository. We call this performing a diff, short for “difference between revisions”.
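As a concrete illustration, Python’s standard difflib module can produce this kind of line-by-line comparison between two file versions. The file names and revision labels here are just for the example:

```python
import difflib

# Two versions of Vegetables.txt, as in the AVM example:
# Revision 1 contained sprouts; Revision 3 changed it to carrots.
rev1 = ["sprouts\n"]
rev3 = ["carrots\n"]

# A unified diff shows exactly what changed between the two revisions.
for line in difflib.unified_diff(rev1, rev3,
                                 fromfile="Vegetables.txt@r1",
                                 tofile="Vegetables.txt@r3"):
    print(line, end="")
```

The output marks removed lines with `-` and added lines with `+`, which is essentially what a VCS shows when you perform a diff between two revisions.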

Collaborative editing

Any source control solution must provide the structure necessary to permit collaboration on different file versions in the repository, while preserving every version of each file.

The potential difficulty arises when more than one user works on the same version of a file at the same time. Let’s say both Fred and Barb have in their respective working folders Revision 2 of our AVM app. Fred edits Vegetables.txt, changing sprouts to carrots and commits the change. At roughly the same time, Barb edits the same file, changing sprouts to peas, and commits the change. What should be the result in the source control repository? If the “last commit wins,” we’d simply lose Fred’s changes from the current version of the file.

Older source control systems (often referred to as “first generation”) get around such difficulties by imposing an isolated editing (or locking) model, whereby only one user at a time can work on a particular version of a file.

Most modern source control systems enable a group of users to work collaboratively on the same version of a file. Referred to as a concurrent editing model, this process allows them to reconcile, or merge, the changes made by more than one user to the same file and, in the process, resolve any conflicting changes.

Isolated editing

A traditional “first-generation” source control solution, such as Source Code Control System (SCCS), developed at Bell Labs in 1972, uses a central repository and a locking model that permits only one person at a time to work on a file.

To use a database metaphor, we can liken the isolated editing model to SQL Server’s pessimistic concurrency model. It assumes, pessimistically, that a conflict is likely if multiple users are “competing” to modify the same file, so it takes locks to prevent it happening.

A typical workflow in source control might look as follows:

  1. Fred performs a check-out of the latest version of the Animals.txt file, in this case, Revision 1.

    1. If the file does not exist in Fred’s working folder, the source control system will copy it over.
    2. The source control system “locks” the file in the repository. (The exact “locking” mechanism varies by system.)
  2. Fred edits the file in his editor of choice. He deletes skunk and adds elk.
  3. Fred saves the changes to his local working folder.
  4. Barb attempts to check out Animals.txt, but cannot because Fred has it locked. Although Fred saved his changes locally, he has yet to perform a check-in to the repository and so the file remains locked and no one else can check it out.
  5. However, Barb can download Animals.txt to her working folder as a read-only copy, so she can at least see the latest version as it exists in the repository.

Figure 6 provides a pictorial overview of this process, to this point.

Figure 6: Allowing only one user at a time to modify a file.

From this point, the workflow might proceed as follows:

  1. Fred performs a check-in, which copies the updated file from his working folder into the repository.

    1. The repository stores Revision 2 of Animals.txt, while retaining Revision 1.
    2. The source control system releases any locks, so both versions of the file are available for checkout.
  2. Barb can now work freely on Animals.txt. She can:
    1. View Revision 2, with an elk instead of a skunk, by re-syncing her working folder.
    2. Compare the two versions to determine what changed.
    3. Check out the file for editing. The source control system will automatically copy the latest version, Revision 2, to her working folder and lock the file in the repository.

Remember that source control terminology can vary a lot depending on the system. For example, some systems refer to the check-out operation as a “get,” and the check-in operation as an “add,” a “delta,” or a “commit.”

An important point in all this is that the repository only increments the revision number in response to the check-in operation. A user can check out a file, edit it and save it to his working folder but then decide to “revert” to the original version of the file. The source control system will release the file, and the revision number will remain unchanged. The user’s local changes are lost, unless he or she saved them elsewhere.
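The check-out/check-in cycle above can be sketched as follows. This is a toy model of the locking workflow, with illustrative names, not the API of any real system:

```python
class LockingRepo:
    """Isolated-editing model: only the lock holder may check in."""

    def __init__(self, content):
        self.versions = [content]   # Revision 1 is versions[0]
        self.locked_by = None

    def check_out(self, user):
        if self.locked_by is not None:
            raise RuntimeError(f"file locked by {self.locked_by}")
        self.locked_by = user       # lock acquired on check-out
        return self.versions[-1]

    def read_only_copy(self):
        return self.versions[-1]    # always allowed, but cannot be checked in

    def check_in(self, user, content):
        if self.locked_by != user:
            raise RuntimeError("check-in requires the lock")
        self.versions.append(content)
        self.locked_by = None       # lock released on check-in
        return len(self.versions)   # new revision number

repo = LockingRepo("cat\nbear\nskunk")
fred_copy = repo.check_out("Fred")             # Fred acquires the lock
try:
    repo.check_out("Barb")                     # Barb's check-out is refused...
except RuntimeError as e:
    print(e)                                   # file locked by Fred
barb_view = repo.read_only_copy()              # ...but she can still read it
repo.check_in("Fred", fred_copy.replace("skunk", "elk"))  # Revision 2
print(repo.check_out("Barb"))                  # now Barb can check out Revision 2
```

Note that reverting would simply set `locked_by` back to `None` without appending a new version, which is why the revision number only moves on check-in.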

Concurrent editing

If working on the files in isolation is the safest route to controlling changes and avoiding conflicts, it’s also the slowest. For a more efficient workflow, modern source control systems allow for the possibility that two or more users will edit the same file at the same time. The notion of a check-out operation, with its attendant locking, disappears; instead, each user performs an “update” to refresh his or her working folder with the most recent copies of the repository files. Users can then work on these files concurrently and commit their changes to the repository when ready.

If isolated editing is akin to SQL Server’s pessimistic concurrency model, then the concurrent editing model is more like SQL Server’s optimistic concurrency model. It hopes, optimistically, that no other user will “interfere” with a file on which another user is working, but it has to deal with the consequences if it happens.

Let’s see how this might alter our typical workflow. Our description of the various processes uses the most common terms associated with centralized version control systems, with the usual proviso that you will see differences even among centralized systems, and certainly for distributed systems where these processes work slightly differently.

  1. Fred performs an update to retrieve the latest files. In this case, he now has in his working folder Revision 2 of Animals.txt.
  2. Barb does likewise.
  3. Fred edits his working copy of the file by adding wolf.
  4. Barb edits her working copy of the file by adding fox.
  5. Fred saves the changed file to his working folder.
  6. Fred commits the changed file to the repository. The updated file becomes Revision 3 in the repository.
  7. Barb saves her edited copy to her working folder.
  8. Barb tries to commit to the repository, but the repository detects that the version of the file has changed since Barb’s last update. The commit fails.
  9. Barb performs an update, retrieving Revision 3 into her working folder, and must now “merge” the changes in her working copy of the file with those in the Revision 3 copy of the file. This merge process might be automated, manual, or a combination of both.
  10. Barb commits the merged file to the repository. The updated file is designated as Revision 4.

Figure 7 provides an overview of this process.

Figure 7: Concurrent editing of the Animals.txt file.

In this simple example, Fred and Barb each make changes that do not really conflict; the wolf and the fox can easily co-exist, at least within the confines of a text file. In such cases, the source control system will probably perform this sort of merge automatically, but in this case even merging the documents manually is a relatively painless process whereby Revision 4 simply contains both the users’ changes, as shown in Figure 8.

Figure 8: Merging two versions of a file to create a third version.

Again, this may not be exactly how a VCS implements a merge, but it gives a good idea of how it works conceptually. Some users don’t even consider this a merge operation, since it can occur automatically as part of the update operation. Some merges, however, are not quite so straightforward.
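The concurrent-editing workflow above (steps 1 to 10) can be sketched in the same toy style. A commit is rejected whenever the repository has moved on since the committer’s last update; revision numbers here start at 1, so they differ from those in the figures:

```python
class ConcurrentRepo:
    """Optimistic model: no locks, but stale commits are rejected."""

    def __init__(self, content):
        self.versions = [content]           # revision numbers are 1-based

    @property
    def head(self):
        return len(self.versions)

    def update(self):
        """Refresh a working copy: returns (revision, content)."""
        return self.head, self.versions[-1]

    def commit(self, content, base_rev):
        if base_rev != self.head:           # someone committed since our update
            raise RuntimeError("out of date: update and merge first")
        self.versions.append(content)
        return self.head

repo = ConcurrentRepo("cat\nbear")          # Revision 1, the common base

fred_rev, fred_copy = repo.update()
barb_rev, barb_copy = repo.update()

repo.commit(fred_copy + "\nwolf", fred_rev) # Fred commits first: Revision 2
try:
    repo.commit(barb_copy + "\nfox", barb_rev)
except RuntimeError as e:
    print(e)                                # Barb's commit fails
head_rev, head_copy = repo.update()         # Barb pulls Fred's version...
repo.commit(head_copy + "\nfox", head_rev)  # ...merges in fox, commits Revision 3
```

Because Fred’s and Barb’s additions touch different lines, Barb’s merge here is trivial; the final revision contains both the wolf and the fox.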

Dealing with conflicts during concurrent edits

A VCS can sometimes auto-merge changes made by different users to the same file. Sometimes, however, concurrent changes to the same version of a file will cause a real conflict, and to resolve it one of the users will need to perform a manual merge operation, within his or her working folder.

Let’s rewind to the stage of our Animals.txt example, where each user was working with Revision 2. Suppose that, in addition to adding wolf to his file, Fred changed bear to black bear, and committed the changes (creating Revision 3). At the same time, in addition to adding fox to her file, let’s assume Barb changed bear to brown bear. Now when Barb tries to commit her file, an actual conflict emerges, one that Barb must resolve, as shown in Figure 9.

Figure 9: Addressing conflicts when trying to merge files.

The source control system can’t merge the two file versions until Barb resolves the conflict between black bear and brown bear (the additions of wolf and fox still cause no problem).

When conflicts of this nature arise, someone must examine the comparison and determine which version of bear should win out. In this case, Barb decides to go with black bear.
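Conceptually, detecting such a conflict is a three-way comparison between the common base version and the two edited copies. The sketch below assumes, for simplicity, that each user only edits existing lines in place or appends new ones at the end; a real diff3-style merge is considerably more involved:

```python
def three_way_merge(base, ours, theirs):
    """Merge two edited copies against a common base, flagging conflicts."""
    merged, conflicts = [], []
    for i, b in enumerate(base):
        o, t = ours[i], theirs[i]
        if o == t:                 # both sides agree (changed or unchanged)
            merged.append(o)
        elif o == b:               # only "theirs" changed this line
            merged.append(t)
        elif t == b:               # only "ours" changed this line
            merged.append(o)
        else:                      # both changed it differently: conflict
            conflicts.append((b, o, t))
            merged.append(None)    # placeholder; a human must decide
    # Lines appended beyond the base merge cleanly from both sides.
    merged += ours[len(base):] + theirs[len(base):]
    return merged, conflicts

base = ["cat", "bear"]
fred = ["cat", "black bear", "wolf"]   # changed bear, added wolf
barb = ["cat", "brown bear", "fox"]    # changed bear, added fox

merged, conflicts = three_way_merge(base, fred, barb)
print(conflicts)   # [('bear', 'black bear', 'brown bear')]
```

The wolf and fox additions merge automatically; only the competing edits to the bear entry surface as a conflict for Barb to resolve.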

It’s worth considering the risk associated with this merge process. Barb’s commit fails, so she can’t save her changes to the repository until she can successfully perform a merge. If something goes wrong with the merge operation, she risks losing her changes entirely. This might be a minor problem for small textual changes like these, but a big problem if she’s trying to merge in substantial and complex changes to application logic. This is why the source control mantra is: commit small changes often.

Merging in distributed source control systems

In distributed source control systems, each user’s client hosts a local repository as well as a working folder. In the distributed model, Barb commits to her local repository, saving her changes, and then pushes the updated file to the central repository, which would fail because Fred got there first. We won’t go into detail on what comes next, but it means we’d see different file versions and version numbers in the central repository from what we saw in our “centralized” example. The main point at this stage is simply that merging is “safer” on distributed source control systems because users can always commit their local changes to their local repository.

Another wise practice is: update often. The earlier and more often we update our working folders and commit our changes, the better the processes work for everyone and the easier it is to merge files and resolve conflicts.


Branches

The way in which we organize files within a repository depends on the team’s preferences, the project, and on the source control system itself. One common approach is to organize files by project, with each set of project files, whether related to database development, application development or something else, stored within the main project folder.

We organize the project folder itself according to the standards set for the organization. In many cases, we will store the files related to the main development effort in a common root subfolder, often named trunk, but sometimes referred to by other names, such as master. We then create subfolders in the trunk folder as required; for example, as we saw earlier in Figure 1, a subfolder for database development and another for application development, and so on.

However, relying exclusively on the trunk folder assumes that everyone on the team is working simultaneously on the project files associated with the main development effort. In reality, it is common that certain team members will wish to “spin off” from the main effort to work on separate, but related projects, often referred to as branches. For this reason, you’ll find that the main project folder will also contain a branches folder (or a folder with a comparable name), in addition to trunk.

Let’s assume that we’ve created a project folder in our repository for our AVM project, added the trunk and branches folders, and built our data-driven application. At this point, all our files are stored in the trunk folder.

At Revision 100, the team releases the latest version of the application to customers as v1.0. They’re now ready to begin developing v2.0. At the same time, customers are submitting feedback and bug reports for v1.0. This is a classic example in software development of where it will be useful to create a branch (of course, various other branching strategies are used too). We create a branch at a particular revision in the repository. When we do this, the repository creates a new path location, identified by whatever we call the branch.

In this case, we’ll create a branch at Revision 100 and call it 1.0_bug fixes. From the user’s perspective, this process creates a separate 1.0_bug fixes subfolder in the branches folder, at Revision 101, and populates it with project files that point back to the Revision 100 files in the trunk folder. Developers can work on the branch files as they would in the trunk, but their efforts remain independent of the trunk development efforts, while preserving the fact that each set of files shares the same roots.

Tags and build numbers

Closely related to the concept of a branch is a tag. In fact, a tag is virtually identical to a branch in that it is a named location for a set of files in the repository, created at a particular revision. We can think of creating a tag as a way to name a set or subset of files associated with a particular revision. The big difference is that we don’t ever modify tags. They represent a stable release of a product, with the tag usually representing some meaningful build number. In our example, we might create a tag at Revision 100, called “v1.0.”

While part of the team works on the 1.0 bug fixes in the branch, the rest of the team continues the 2.0 development in the trunk. Those assigned to the bug fixes can work with the branch folders and files just as they would with those in the trunk. They can edit the files in their working folders and commit changes to the repository, which will maintain revision histories for the life of each branch file, starting from the point of branch creation.

For example, let’s say Barb creates a branch of the AVM project, containing all the project files, called 1.0_bug fixes, in preparation for the first maintenance release (v1.1). This means that the branch folder is at Revision 101, and contains, among other files, the latest revision (let’s say Revision 100) of Animals.txt, as it existed in the trunk at the time she created the branch. Meanwhile, Fred is working in the trunk in preparation for the release of v2.0.

Fred updates his working folder with the latest file versions in the trunk. He retrieves Revision 100 of the Animals.txt file, adds chipmunk and walrus and deletes fox. He saves the file and commits his changes to the repository. This creates Revision 102 in the repository.

Barb updates her working folder with the latest file versions in the branch. She retrieves the latest version of the Animals.txt file in the branch, which is still Revision 100; Fred’s commit has no effect on the file versions in the branch. Barb changes black bear to bear and commits the change, creating Revision 103 in the repository, as shown in Figure 10.

Figure 10: Creating a branch for the Animals.txt file.

As you can see, at this point, we have two simultaneous but separate development efforts. Of course, at some point the team may want to merge changes made in the 1.0_bug fixes branch back into the trunk so the next full product release can benefit from the maintenance fixes.
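The branch bookkeeping in this example can be sketched as follows. This toy model records only change descriptions, not file contents, and uses a single global revision counter in the SVN style:

```python
class Project:
    def __init__(self, start_rev=0):
        self.rev = start_rev
        self.history = {"trunk": []}        # branch name -> list of (revision, change)

    def commit(self, branch, change):
        self.rev += 1                       # one global revision counter
        self.history[branch].append((self.rev, change))
        return self.rev

    def branch(self, name, source="trunk"):
        self.rev += 1                       # creating the branch is itself a revision
        self.history[name] = list(self.history[source])  # shares the source's roots
        return self.rev

proj = Project(start_rev=99)                       # pick up the story at Revision 99
proj.commit("trunk", "release v1.0")               # Revision 100
proj.branch("1.0_bug fixes")                       # Revision 101: branch created
proj.commit("trunk", "+chipmunk +walrus -fox")     # Revision 102: Fred, in trunk
proj.commit("1.0_bug fixes", "-black bear +bear")  # Revision 103: Barb, in branch
print(proj.rev)                                    # 103
```

After these commits, each branch carries its own history from Revision 100 onward: Fred’s Revision 102 appears only in the trunk’s history, and Barb’s Revision 103 only in the branch’s.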


Merges

Previously, we discussed the need to perform a merge operation to resolve conflicting changes to the same version of a file in the repository. Similarly, when we create a branch, we’ll at some point want to merge the changes made to the branch into the trunk. Conversely, we will also need to merge changes from the trunk into a branch. For example, this might be necessary so that a long-running “feature branch” can catch up with the changes in the trunk. As you can imagine, a merge in either direction could be a complex process, especially if conflicts arise across many project files.

The type of comparison made when merging between branches is similar to merging files during normal concurrent editing. In fact, within a particular source control system, the two processes might seem nearly identical. Conceptually, however, they are somewhat different in intent. When merging during concurrent editing of the same file, we’re trying to resolve all conflicts to produce a copy of the file that represents a single source of truth. When merging a branch to the trunk, or vice versa, we’re retaining two sources of the truth and we’re interested only in incorporating the changes from one into the other.

For example, suppose Barb wishes to merge into the trunk the changes she made in the branch. Barb’s latest commit to the repository was Revision 103 (+bear, −black bear), affecting the Animals.txt file in the branch. Meanwhile, Fred’s latest commit to the repository was Revision 102 (+chipmunk, +walrus, −fox), affecting the Animals.txt file in trunk. The resulting merge operation will produce a new version of the Animals.txt file in the trunk (Revision 104), while the branch version will remain untouched at Revision 103.

Let’s consider each of these changes in turn, in the context of the merge operation.

Figure 11: Resolving conflicts in the Animals.txt file.

Fred’s additions of chipmunk and walrus present no problem because they affect only the file slated for v2.0 (the trunk) and have nothing to do with v1.1 (the branch). In other words, Revision 104 will still contain the chipmunk and walrus entries.

The next potential conflict is with the fox listing. Fred removed fox from the file in the trunk, but Barb did not remove it from the version in her branch. You might think that in merging from branch to trunk, we’d add fox back in to the trunk version. In effect, however, this would simply overwrite Fred’s change, and any effective source control solution will recognize the potential for this sort of problem and avoid it. It may help to think of the merge in terms of the branch change set: in this case, all the changes in the current branch revision compared to the revision at which the branch was created. Barb wishes to apply this change set (−black bear, +bear) to the trunk file. This means, in SVN at least, that the merge will retain all other entries as they currently exist in the trunk, and so fox will still not exist in the merged version. Another VCS might flag this as a possible conflict, since the branch version assumes that the fox entry exists when it no longer does in the trunk. Ideally, before performing the merge, Barb would have updated her working copy of the trunk and merged from trunk to branch, or at least inspected the differences and raised any possible issues with the team before merging in the opposite direction.

Merging works differently in different source control systems

The methods used to perform branch comparisons and merges vary from product to product, as does the amount of manual input required to ensure that the merge does not do more harm than good. Behavior can also vary between configurations of the same product.

Finally, we have the potential conflict between bear and black bear. Just because Barb changed the entry in her branch does not automatically mean it should be changed in the new release. Even so, SVN does not treat this as a conflict, though another VCS might. Only if Fred had also changed the black bear listing would SVN treat it as a conflict. The result is that the merged file in the trunk will contain the entry bear. Again, the onus is on Barb to ensure that this change from the maintenance release will cause no problems when applied to v2.0.

Branching usually comes off without a hitch; we point to the files and folders in the trunk that we want to branch and within seconds, off we go, with a new branch. Merging is the tricky part. The risks and overhead associated with merging can sometimes keep organizations from taking advantage of these capabilities. They might branch, but they don’t merge, often resulting in duplicated development efforts and manual copy-and-paste operations.

On top of this, it’s arguable that some source control systems are better at merging than others. Git, for example, was designed to merge, whereas SVN has a reputation for inflicting its fair share of agony on those trying to merge files.

Ultimately, though, the ability to branch and merge is crucial to any organization that needs to expedite its projects and work on them in parallel. The mantra is to merge early and often; if you do, the process is not usually too painful.


We’ve discussed many of the primary benefits of a source control system, and taken an initial, largely conceptual, look at the components (repository and working folder) and processes (branching and merging) of such a system, which help us realize these benefits.

Application developers have been reaping these benefits for many years, but database developers are only just starting to catch up. It’s true that database development is different from application development and often database developers have to remind the rest of the development team of those differences, but that does not mean they cannot benefit from a proven system to store and manage files, track changes, work with different versions, and maintain a historical record of the file’s evolution.


SQL – Simple Talk

Outta control


Thanks Tommy


About Krisgo

I’m a mom who has worn many different hats in this life; from scout leader, camp craft teacher, parents group president, colorguard coach, member of the community band, and stay-at-home mom to full-time worker, I’ve done it all – almost! I still love learning new things, especially creating and cooking. Most of all I love to laugh! Thanks for visiting – come back soon!


Deep Fried Bits

Document control practices in the age of HIPAA

Information is the lifeblood of any organization, but having access to too much — especially personally identifiable information — can cause problems if the proper document control practices aren’t in place.

It’s now more important than ever to keep information out of the wrong hands. The Identity Theft Resource Center reports that there were 781 data breaches in 2015 — the second-highest year on record since the ITRC began tracking breaches in 2005.

Lawyers, for example, are taking a closer look at the Health Insurance Portability and Accountability Act (HIPAA) privacy rule and their responsibility to keep client health information within their firms private. The fines for noncompliance are enormous, so it is imperative that everyone in firms understands their roles in document control practices.

The HIPAA privacy rule

HIPAA establishes national standards to protect individuals’ medical records and other personal health information. It applies to health plans, healthcare clearinghouses and healthcare providers that conduct certain electronic healthcare transactions. The overall objective is to protect the privacy of personal health information and set limits and parameters on the access and use of such information without patient authorization.

Law firms must make reasonable efforts to protect their clients’ information from anyone who doesn’t require access to that information to do their jobs. This approach is known as a pessimistic model for document management. Law firms have traditionally operated under an optimistic model, which allows access to pretty much everything. However, the rise of the data breach has most definitely changed the game.

Data privacy: Not just a technology issue

Information protection is not just an IT issue, and data breaches should not be viewed simply as a breakdown in technological controls. Every department — indeed, every employee — has a part to play in the security of information under the HIPAA privacy rule. The reality is that all firms must acknowledge the enterprise-wide disruption that can occur when a data breach is discovered. The firms that prepare ahead of time will not only be able to withstand a data breach, but will also safeguard their reputation with clients, partners and employees. Making the choice to implement an information governance and security program is the first step toward data protection.

Not just a paper issue

While the majority of data breaches involve electronic files, paper files are also susceptible. Of the data breaches reported by the Identity Theft Resource Center, about 30 of them were deemed to be a paper data breach. For example, one insurance company reported more than 5,000 records were exposed in March of last year. What was the nature of the breach? Eleven people were charged with identity theft and credit card fraud after an employee allegedly printed and shared screenshots of more than 5,000 subscriber profiles. Most of the other paper data breaches reported included vandalism or break-in charges. All of this is a result of poor document management.

Document control: Where to begin

Law firm records managers and information governance professionals should work to develop a program for how information is managed and then communicate the data protection requirements to all attorneys and staff personnel.

A point to remember is data protection applies to both physical and electronic records. Therefore, a proper chain of custody workflow must be part of the data protection requirements. Chain of custody helps organizations understand the who, what, when, where and why of a particular document.

For physical records, barcode tracking and RFID technology are the leading tools in this arena, while a document management system will fit the bill for electronic records since it is able to capture this type of metadata.
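The chain-of-custody metadata described above can be modeled very simply. The sketch below is hypothetical; the field names are illustrative and not taken from any particular document management product:

```python
# Hypothetical sketch of the chain-of-custody metadata a document
# management system might capture each time a record is handled,
# answering the who, what, when, where, and why of a document.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CustodyEvent:
    who: str        # person or system that handled the document
    what: str       # action taken (checked out, copied, destroyed, ...)
    when: datetime  # timestamp of the action
    where: str      # physical location or system where it occurred
    why: str        # business justification for the access

# An audit trail is just the ordered list of custody events for a record.
audit_trail = [
    CustodyEvent("jsmith", "checked out",
                 datetime.now(timezone.utc),
                 "records room B", "case preparation"),
]
print(audit_trail[0].who, audit_trail[0].what)
```

For a physical record, the "who" and "where" might come from a barcode or RFID scan; for an electronic record, the document management system records the same fields automatically.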

Of course, proper security permissions apply whether the information is on paper or electronic. Physical records can be stored in locked cabinets, while electronic records should be protected with encryption software to prevent information from falling into the wrong hands. There is no shortage of such tools on the market today.


ECM, collaboration and search news and features

Internal Control – From Necessary Evil To Operational Excellence

Hyperconnectivity, the concept synonymous with the Internet of Things (IoT), is the emerging face of IT in which applications, machine-based sensors, and high-speed networks merge to create constantly updated streams of data. Hyperconnectivity can enable new business processes and services and help companies make better day-to-day decisions. In a recent survey by the Economist Intelligence Unit, 6 of 10 CIOs said that not being able to adapt for hyperconnectivity is a “grave risk” to their business.

IoT technologies are beginning to drive new competitive advantage by helping consumers manage their lives (Amazon Echo), save money (Ôasys water usage monitoring), and secure their homes (August Smart Lock). The IoT also has the potential to save lives. In healthcare, this means streaming data from patient monitoring devices to keep caregivers informed of critical indicators or preventing equipment failures in the ER. In manufacturing, the IoT helps drive down the cost of production through real-time alerts on the shop floor that indicate machine issues and automatically correct problems. That means lower costs for consumers.

Several experts from the IT world share their ideas on the challenges and opportunities in this rapidly expanding sector.

Where are the most exciting and viable opportunities right now for companies looking into IoT strategies to drive their business?

Mike Kavis: The best use case is optimizing manufacturing by knowing immediately what machines or parts need maintenance, which can improve quality and achieve faster time to market. Agriculture is all over this as well. Farms are looking at how they can collect information about the environment to optimize yield. Even insurance companies are getting more information about their customers and delivering custom solutions. Pricing is related to risk, and in the past that has been linked to demographics. If you are a teenager, you are automatically deemed a higher risk, but now providers can tap into usage data on how the vehicle is being driven and give you a lower rate if you present a lower risk. That can be a competitive advantage.

Dinesh Sharma: Let me give you an example from mining. If you have sensored power tools and you have a full real-time view of your assets, you can position them in the appropriate places. Wearable technology lets you know where the people who might need these tools are, which then enables more efficient use of your assets. The mine is more efficient, which means reduced costs, and that ultimately results in a margin advantage over your competition. Over time, the competitive advantage will build and there will be more money to invest in further digital transformation capabilities. Meanwhile, other mining companies that aren’t investing in these technologies fall further behind.

With the IoT, how should CIOs and other executives think and act differently?

Martha Heller: The points of connection between IT and the business should be as strategic and consultative as possible. For example, the folks from IT who work directly with R&D, marketing, and data scientists should be unencumbered by issues such as network reliability, help desk issues, and application support. Their job is to be business leaders and to focus on innovative ideas, not to worry for an instant about “Oh, your e-mail isn’t working?” There’s also obviously the need for speed and agility. We’ve got to find a way to transform a business idea into something that the businessperson can touch and feel as quickly as possible.

Greg Kahn: Companies are realizing that they need to partner with others to move the IoT promise forward. It’s not feasible that one company can create an entire ecosystem on their own. After all, a consumer might own a Dell laptop, a Samsung TV, an Apple watch, a Nest device, an August Smart Lock, and a Whirlpool refrigerator.

It is highly unrealistic to think that consumers will exchange all of their electronic equipment and appliances for new “connected devices.” They are more likely to accept bridge solutions (such as what Amazon is offering with its Dash Replenishment Service and Echo) that supplement existing products. CIOs and other C-suite executives will need to embrace partnerships boldly and spend considerable time strategizing with like-minded individuals at other companies. They should also consider setting up internal venture arms or accelerators as a way to develop new solutions to challenges that the IoT will bring.

What is the emerging technology strategy for effectively enabling the IoT?

Kavis: IT organizations are still torn between DIY cloud and public cloud, yet with the IoT and the petabytes of data being produced, it changes the thinking. Is it really economical to build this on your own when you can get the storage for pennies in the cloud? The IoT also requires a different architecture that is highly distributed, can process high volumes of data, and has high availability to manage real-time data streaming.

On-premise systems aren’t really made for these challenges, whereas the public cloud is built for autoscaling. The hardest part is connecting all the sensors and securing them. Cloud providers, however, are bringing to market IoT platforms that connect the sensors to the cloud infrastructure, so developers can start creating business logic and applications on top of the data. Vendors are taking care of the IT plumbing of getting data into the systems and handling all that complexity so the CIO doesn’t need to be the expert.

Kahn: All organizations, regardless of whether they outsource data storage and analysis or keep it in house, need to be ready for the influx of information that’s going to be generated by IoT devices. It is an order of magnitude greater than what we see today. Those that can quickly leverage that data to improve operational efficiency and consumer engagement will win.

Sharma: The future is going to be characterized by machine interactions with core business systems instead of by human interactions. Having a platform that understands what’s going on inside a store – the traffic near certain products together with point-of-sale data – means we can observe when there’s been a lot of traffic but the product’s just not selling. Or if we can see that certain products are selling well, we can feed that data directly into our supply chain. So without any human interaction, when we start to see changes in buying behavior we can update our predictive models. And if we see traffic increasing in another part of the store in a similar pattern we can refine the algorithm. We can automatically increase supply of the product that’s in the other part of the store. The concept of a core system that runs your process and workflow for your business but is hyperconnected will be essential in the future.
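The store scenario Sharma describes can be sketched as a simple decision rule that combines traffic-sensor readings with point-of-sale data. The thresholds and return values below are made up purely for illustration:

```python
# Toy illustration of a hyperconnected-store rule: combine in-store
# traffic sensors with point-of-sale data to drive automatic decisions
# without human interaction. Thresholds are illustrative only.

def shelf_action(traffic_per_hour: int, units_sold_per_hour: int) -> str:
    """Decide what to do for one product location in the store."""
    if traffic_per_hour > 100 and units_sold_per_hour < 5:
        # Lots of interest but few sales: worth flagging for review.
        return "flag: high traffic but low conversion"
    if traffic_per_hour > 100 and units_sold_per_hour >= 5:
        # Selling well: feed demand directly into the supply chain.
        return "increase supply"
    return "no action"

print(shelf_action(150, 2))   # high traffic, product just not selling
print(shelf_action(150, 12))  # selling well
print(shelf_action(40, 1))    # quiet aisle
```

A real system would of course learn these thresholds from streaming data rather than hard-code them, which is the predictive-model refinement Sharma alludes to.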

Privacy and security are among the top concerns with hyperconnectivity. Are there any useful approaches yet?

Kavis: We have a lot less control over what is coming into companies from all these devices, which is creating many more openings for hackers to get inside an organization. There will be specialized security platforms and services to address this, and hardware companies are putting security on sensors in the field. The IoT offers great opportunities for security experts wanting to specialize in this area.

Kahn: The privacy and security issues are not going to be solved anytime soon. Firms will have to learn how to continually develop new defense mechanisms to thwart cyber threats. We’ve seen that play out in the United States. In the past two years, data breaches have occurred at both brick-and-mortar and online retailers. The brick-and-mortar retail industry responded with a new encryption device: the chip card payment reader. I believe it will become a cost of business going forward to continually create new encryption capabilities. I have two immediate suggestions for companies: (1) develop multifactor authentication to limit the threat of cyber attacks, and (2) put protocols in place whereby you can shut down portions of systems quickly if breaches do occur, thereby protecting as much data as possible.

Polly Traylor is a freelance writer who reports frequently about business and technology.

Download the PDF (867 KB)



Digitalist Magazine

Google’s Voice Access app lets you control Android devices by speaking

Google today announced the beta launch of Voice Access, an app that will let people use speech recognition to control Android devices.

While anyone will presumably be able to use it, it’s designed with specific groups of people in mind — specifically “people who have difficulty manipulating a touch screen due to paralysis, tremor, temporary injury or other reasons,” Eve Andersson, manager of accessibility engineering at Google, wrote in a blog post.

“For example, you can say ‘open Chrome’ or ‘go home’ to navigate around the phone, or interact with the screen by saying ‘click next’ or ‘scroll down,’” Andersson wrote.


Above: Google’s Voice Access app.

Image Credit: Google

In launching Voice Access, Google is the latest company in the past few weeks to emphasize what it’s doing in the area of accessibility. Twitter started letting people submit captions for images they tweet out. Facebook enhanced the screen reader for iOS with automatically generated spoken image captions. Microsoft talked about the Seeing AI app at its Build developer conference. And Apple released videos showing how its iPad tablet helps an autistic person communicate with others.

The interesting thing to point out is how much Google has improved its speech-recognition technology, which draws on artificial intelligence. It’s deployed on many millions of Android devices, and last year Google said that its recognition error rate for Google Voice voicemail transcription had gone down by 50 percent.

Google’s Voice Access app now has “enough testers,” according to the link Google provided to try it out. Look for Google to launch the app out of beta in the future.





Big Data – VentureBeat