Tag Archives: Chief

The CMO + Chief Customer Officer: The Beyonce & Jay-Z of the C-Suite


Like any relationship, the CMO-CCO partnership may take some work in the beginning to create a cohesive, fruitful bond.

The starting point should be recognizing that both executives share big goals — increasing revenue, delivering better products and engendering more profitable and longer-lasting customers. Then it’s a matter of zeroing in on the synergies to meet them.

For any CMOs still having doubts about that new chief customer officer down the hall, here are three things to remember:

Chief Customer Officer Is a Natural Partner

The CMO’s role has evolved in recent years thanks to new technologies that equip marketing organizations with the data to better understand customer behavior and create a seamless digital experience across the customer lifecycle. This example of “knowledge is power” is a reason that many chief marketing officers and heads of sales have worked more collaboratively in the pre-sales environment. The same should hold true for CMOs and CCOs post sale.

For example, CMOs have championed the use of marketing automation technology to track and measure a user’s behavior on a website or how they interact with an email, and to choose responses based on what the data shows. The CMO can help the CCO use the same techniques to market to customers with the same intelligence and creativity as to prospects.

The CMO also should embrace working with the CCO to take the same marketing tone, style and voice that the company has carefully cultivated and extend it to customer retention and growth efforts.

  • Are messages and activities aligned?
  • Do customers receive, say, a newsletter that seems disjointed from the company’s other marketing?

The CMO can help the chief customer officer tell a powerful story and encourage a brand crush among prospects and customers alike. CCOs, meanwhile, can aid the CMO in better understanding customers’ needs and challenges so he or she can create digital experiences that truly reflect the marketplace.

Having a CCO Is Good for the CMO

The CMO of yesterday focused squarely on feeding leads to sales. The modern CMO is charged with a broader range of responsibilities in not only generating leads but in growing brand awareness, impacting the bottom line, retaining customers and turning them into the company’s biggest advocates. To remain vital as a CMO in this new model, you need relevancy in the post-sales environment. You need to be asking yourself: How do I partner with this CCO who will need to simultaneously personalize and automate more customer interactions?

CMOs should look at the CCO’s arrival as a tremendous opportunity. Here is a C-level executive who will partner with you on delivering on the brand promise!

Think of Yourself as a Pioneer

Despite all the buzz about the rise of the CCO, only 22 percent of the Fortune 100 had one in 2014, according to a CCO Council survey. And in day-to-day practice, many businesses still market far more in pre-sales than in post.

In some cases, companies think of customer marketing as programs that leverage the voice of the customer to influence purchase behaviors, such as customer references and case studies, rather than as a strategic, coordinated set of activities designed to help customers realize the value of what they’ve purchased and compel them to become brand advocates. By locking arms with the CCO, CMOs can be at the forefront of a cool new approach to driving revenue and customer loyalty.

So, you see, far from being a threat to the CMO, the CCO is an amazing opportunity.


Act-On Blog

Equifax announces Chief Security Officer and Chief Information Officer have left


(Reuters) — Equifax said on Friday that it made changes in its top management as part of its review of a massive data breach, with two technology and security executives leaving the company “effective immediately.”

The credit-monitoring company announced the changes in a press release that gave its most detailed public response to date of the discovery of the data breach on July 29 and the actions it has since taken.

The statement came on a day when Equifax’s share price continued to slide following a week of relentless criticism over its response to the data breach.

Lawmakers, regulators and consumers have complained that Equifax’s response to the breach, which exposed sensitive data like Social Security numbers of up to 143 million people, had been slow, inadequate and confusing.

Equifax on Friday said that Susan Mauldin, chief security officer, and David Webb, chief information officer, were retiring.

The company named Mark Rohrwasser as interim chief information officer and Russ Ayres as interim chief security officer, saying in its statement, “The personnel changes are effective immediately.”

Rohrwasser has led the company’s international IT operations, and Ayres was a vice president in the IT organization.

The company also confirmed that Mandiant, the threat intelligence arm of the cyber firm FireEye, has been brought on to help investigate the breach. It said Mandiant was brought in on Aug. 2 after Equifax’s security team initially observed “suspicious network traffic” on July 29.

The company has hired public relations companies DJE Holdings and McGinn and Company to manage its response to the hack, PR Week reported. Equifax and the two PR firms declined to comment on the report.

Equifax’s share price has fallen by more than a third since the company disclosed the hack on Sept. 7. Shares shed 3.8 percent on Friday to close at $92.98.

U.S. Senator Elizabeth Warren, who has built a reputation as a fierce consumer champion, kicked off a new round of attacks on Equifax on Friday by introducing a bill along with 11 other senators to allow consumers to freeze their credit for free. A credit freeze prevents thieves from applying for a loan using another person’s information.

Warren also signaled in a letter to the Consumer Financial Protection Bureau, the agency she helped create in the wake of the 2007-2009 financial crisis, that it may require extra powers to ensure closer federal oversight of credit reporting agencies.

Warren also wrote letters to Equifax and rival credit monitoring agencies TransUnion and Experian, federal regulators and the Government Accountability Office to see if new federal legislation was needed to protect consumers.

Connecticut Attorney General George Jepsen and more than 30 others in a state group investigating the breach acknowledged that Equifax has agreed to give free credit monitoring to hack victims but pressed the company to stop collecting any money to monitor or freeze credit.

“Selling a fee-based product that competes with Equifax’s own free offer of credit monitoring services to victims of Equifax’s own data breach is unfair,” Jepsen said.

Also on Friday, the chairman and ranking member of the Senate subcommittee on Social Security urged the Social Security Administration to consider nullifying its contract with Equifax and making the company ineligible for future government contracts.

The two senators, Republican Bill Cassidy and Democrat Sherrod Brown, said they were concerned that personal information maintained by the Social Security Administration may also be at risk because the agency worked with Equifax to build its E-Authentication security platform.

Equifax has reported that for 2016, state and federal governments accounted for 5 percent of its total revenue of $3.1 billion.

400,000 Britons affected

Equifax, which disclosed the breach more than a month after it learned of it on July 29, said at the time that thieves may have stolen the personal information of 143 million Americans in one of the largest hacks ever.

The problem is not restricted to the United States.

Equifax said on Friday that data on up to 400,000 Britons was stolen in the hack because it was stored in the United States. The data included names, email addresses and telephone numbers but not street addresses or financial data, Equifax said.

Canada’s privacy commissioner said on Friday that it has launched an investigation into the data breach. Equifax is still working to determine the number of Canadians affected, the Office of the Privacy Commissioner of Canada said in a statement.


Big Data – VentureBeat

AI – What Chief Compliance Officers Care About


Arguably, there are more financial institutions located in the New York metropolitan area than anywhere else on the planet, so it was only fitting for a conference on AI, Technology Innovation & Compliance to be held in NYC – at the storied Princeton Club, no less. A few weeks ago I had the pleasure of speaking at this one-day conference, and found the attendees’ receptivity to artificial intelligence (AI), and creativity in applying it, to be inspiring and energizing. Here’s what I learned.

CCOs Want AI Choices

As you might expect, the Chief Compliance Officers (CCOs) attending the AI conference were extremely interested in applying artificial intelligence to their business, whether in the form of machine learning models, natural language processing or robotic process automation – or all three. These CCOs already had a good understanding of AI in the context of compliance, knowing that:

  • Working through sets of rules alone will not find “unknown unknowns”
  • They should take a risk-based approach in determining where and how to divert resources to AI-based methods in order to find the big breakthroughs.

All understood the importance of data, and that getting the right data into the AI system is job number one. Otherwise, it’s “garbage in, garbage out.” I also discussed how to provide governance around the single source of data, the importance of regular updating, and how to ensure permissible use and quality.

AI Should Explain Itself

Explainable AI (XAI) is a big topic of interest to me, and among the CCOs at the conference, there was an appreciation that AI needs to be explainable, particularly in the context of compliance with GDPR. The audience also recognized that their organizations need to layer in the right governance processes around model development, deployment, and monitoring, which are key steps in the journey toward XAI. I reviewed the current state of the art in Explainable AI methods, and where the road leads in getting to AI that is more grey-boxed.
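To make that concrete, here is a minimal sketch of a reason-code style explanation, assuming a simple logistic regression scorer. The feature names, weights, and applicant values below are hypothetical, and this illustrates the general idea rather than any particular FICO method:

    # Sketch: reason-code style explanation for a linear scoring model.
    # Feature names, weights, and applicant values are hypothetical.
    import numpy as np

    feature_names = ["utilization", "late_payments", "account_age_years"]
    weights = np.array([2.1, 1.4, -0.6])    # assumed learned coefficients
    bias = -1.0
    applicant = np.array([0.85, 2.0, 3.0])  # one applicant's inputs

    # Per-feature contribution to the log-odds of the score
    contributions = weights * applicant
    score = 1.0 / (1.0 + np.exp(-(contributions.sum() + bias)))

    # Rank features by how strongly they pushed the score: these become "reasons"
    print(f"score = {score:.3f}")
    for i in np.argsort(-contributions):
        print(f"{feature_names[i]}: contribution {contributions[i]:+.2f}")

Ranking inputs by their contribution to an individual score is one of the simplest explanation techniques; more elaborate XAI methods extend the same idea to nonlinear models.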

Ethics and Safety Matter

In pretty much every AI conversation I have, ethics are the subject of lively discussion. The New York AI conference was no exception. The panel members and I talked about how any given AI system is not inherently ‘ethical’; it learns from the inputs it’s given. The modelers who build the AI system need to avoid passing in sensitive data fields, and those same modelers need to examine whether inadvertent biases are derived from the inputs during training of the machine learning model.
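One practical check in that spirit, sketched below with made-up column names, is to test whether candidate model inputs act as proxies for a sensitive attribute that has been deliberately excluded from training; strongly correlated features deserve review before they go into the model. This is a toy illustration, not a complete fairness audit:

    # Sketch: flag candidate features that act as proxies for an excluded
    # sensitive attribute. The DataFrame and column names are hypothetical.
    import pandas as pd

    df = pd.DataFrame({
        "zip_density":    [0.9, 0.8, 0.2, 0.1, 0.7, 0.3],
        "tenure_months":  [12, 48, 36, 60, 6, 24],
        "sensitive_attr": [1, 1, 0, 0, 1, 0],   # never fed to the model itself
    })

    for col in ["zip_density", "tenure_months"]:
        corr = df[col].corr(df["sensitive_attr"])
        flag = "REVIEW" if abs(corr) > 0.5 else "ok"
        print(f"{col}: correlation with sensitive attribute = {corr:+.2f} ({flag})")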

Here, I was glad to be able to share some of the organizational learning FICO has accumulated over decades of work in developing analytic models for the FICO® Score, our fraud, anti-money laundering (AML) products and many others.

AI safety was another hot topic. I shared that although models will make mistakes and there needs to be a risk-based approach, machine decision-making is often better than human decision-making, as with autopilots on airplanes. Humans still need to be there to step in when conditions change to the degree that the AI system may no longer make an optimal decision; this could arise from a change in the environment or in the character of the data.

In the end, an AI system will work with the data on which it has trained, and is trained to find patterns in it, but the model itself is not necessarily curious; the model is still constrained by the algorithm development, data posed in the problem, and the data it trains on.

Open Source Is Risky

Finally, the panel and I talked about AI software and development practices, including the risks of open source software and open source development platforms. I indicated that I am not a fan of open source, as it often leads to scientists using algorithms incorrectly, or relying on someone else’s implementation. Building an AI implementation from scratch, or from an open source development platform, gives data scientists more hands-on control over the quality of the algorithms, assumptions, and ultimately the AI model’s success in use.

I am honored to have been invited to participate in Compliance Week’s AI Innovation in Compliance conference. Catch me at my upcoming speaking events in the next month: The University of Edinburgh Credit Scoring and Credit Control XV Conference on August 30-September 1, and the Naval Air Systems Command Data Challenge Summit.

In between speaking gigs I’m leading FICO’s 100-strong analytics and AI development team, and commenting on Twitter @ScottZoldi. Follow me, thanks!


FICO

The Pardonizer in Chief

© Tom Tomorrow

I suspect that sometime in the near future, Donald Trump is going to try to pardon someone he shouldn’t, and all hell will break loose. My only question is whether when that happens, will the Republicans stand up to Trump, or will they roll over?



Political Irony

Expert Interview (Part 1): Reynold Xin, Databricks Chief Architect and Founder, on Driving Forces Behind Major Changes to Spark 2.x

The Spark 2.x major version release has been getting a lot of attention in the Big Data community. At the last Strata + Hadoop World in San Jose, Syncsort’s Big Data Product Manager, Paige Roberts, sat down with Reynold Xin (@rxin) of Databricks to get the details on the driving factors behind Spark 2.x and its newest features.

Reynold Xin is the Chief Architect for Spark core at Databricks and one of Spark’s founding fathers. He had just finished giving a presentation on the full history of Spark, from taking inspiration from mainframe databases to the cutting edge features of Spark 2.x.

Paige Roberts: First, let’s go ahead and have you introduce yourself.

Reynold Xin: Okay, sounds good. My name is Reynold Xin. I’m one of the co-founders of Databricks. I’m also chief architect and have been leading the development of Spark with the company for the past couple of years.

Before Databricks, I was working on Spark as part of my graduate school studies at UC Berkeley. So, I’ve been working on Spark for a while.

Roberts: How long ago was that?

Xin: I started around 2011, so more than five years. I’ve been behind most of the major efforts of the past couple of years.


Roberts: We just came from your talk where you started the Spark history with IMS databases on mainframes, and you ended up at Spark 2.0. That was quite a journey.

Xin: Yes, absolutely.

I don’t want to make you re-do your whole talk, so let’s focus on the main changes between Spark 1.6 and 2.0.

It’s a big version bump. It’s the first major release of Spark other than the initial Spark 1.0. We focused primarily on three aspects. One is performance optimization.

We started rolling out the DataFrame API in Spark 1.3, I think, which lays down the foundation. Because it’s a higher-level API, we now have more room to do performance optimizations. If the user gives us a specific function and we have to run it, there’s not much we can do.

With Spark 2.0 we were able to, in many cases, improve performance anywhere from 2x to around 100x, through a couple of projects, mostly Project Tungsten.

Right. I did a blog post on Tungsten last year. Very cool project.
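To make the idea concrete, here is a minimal PySpark sketch, with made-up data and column names, of the kind of declarative DataFrame code that gives the engine room to optimize. Because the operations are structured expressions rather than opaque user functions, Catalyst and Tungsten can rewrite and code-generate the whole plan:

    # Sketch: declarative DataFrame operations that Spark's optimizer can rewrite.
    # The event data and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dataframe-example").getOrCreate()

    events = spark.createDataFrame(
        [("purchase", "2016-03-01 10:15:00", 19.99),
         ("click",    "2016-03-01 10:16:00", 0.0),
         ("purchase", "2016-03-02 09:05:00", 42.50)],
        ["event_type", "timestamp", "amount"])

    # Filter, group, and aggregate are declared structurally, not as lambdas,
    # so the optimizer can reorder, fuse, and compile the whole pipeline.
    daily = (events
             .filter(F.col("event_type") == "purchase")
             .groupBy(F.to_date("timestamp").alias("day"))
             .agg(F.count("*").alias("purchases"),
                  F.sum("amount").alias("revenue")))

    daily.explain()   # prints the optimized physical plan
    daily.show()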


So, that’s the first focus area, performance. The second one is Structured Streaming.

We have heard from a lot of our customers that new requirements are surfacing in building real-time continuous applications. These applications have to make decisions nonstop on usually a live stream of data. And often these types of applications also have to combine with batch applications.

We started thinking about how we can actually build a new streaming engine and APIs that are suitable for these kinds of applications. Our end result was actually pretty simple. We just took the DataFrame API – there’s no change to it – added a couple small extensions, and then the users could express that streaming computational logic exactly how they would express the batch part. So, that is the second part, Structured Streaming.

Nice! Batch and streaming together, without having to learn a new API.
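As a rough sketch of what that looks like in code, the streaming job below reuses the same DataFrame expressions a batch job would use, swapping read for readStream and adding an output sink. The source directory, schema, and checkpoint path are hypothetical, and the example assumes Spark 2.1 or later:

    # Sketch: the same DataFrame logic expressed over a stream of files.
    # Source directory, schema, and checkpoint location are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

    spark = SparkSession.builder.appName("structured-streaming-example").getOrCreate()

    schema = (StructType()
              .add("event_type", StringType())
              .add("amount", DoubleType())
              .add("timestamp", TimestampType()))

    stream = spark.readStream.schema(schema).json("/data/incoming/")

    # Identical transformation style to the batch example, plus a time window
    revenue = (stream
               .filter(F.col("event_type") == "purchase")
               .groupBy(F.window("timestamp", "1 hour"))
               .agg(F.sum("amount").alias("revenue")))

    query = (revenue.writeStream
             .outputMode("update")
             .format("console")
             .option("checkpointLocation", "/tmp/checkpoints/revenue")
             .start())
    query.awaitTermination()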

Last but not least, a lot more work is being done in SQL.

So, Spark 2.0 has become the most SQL 2003 standard-compliant open source Big Data query engine. We added window functions and sub-queries, and 2.0 can run every single one of the 99 TPC-DS queries, which is a standard benchmark, without modifying the queries. As far as I know, none of the other engines can do that.

This makes it much easier for business analysts to run their existing work, and port their existing business applications over to Spark SQL.
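As a small illustration of that SQL surface area, here is a window function running on Spark SQL; the orders data and column names are made up:

    # Sketch: a SQL:2003-style window function on Spark SQL.
    # The "orders" rows and columns are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql-window-example").getOrCreate()

    rows = [("c1", "2017-01-05", 120.0), ("c1", "2017-02-11", 80.0),
            ("c2", "2017-01-20", 45.0),  ("c2", "2017-03-02", 60.0)]
    spark.createDataFrame(rows, ["customer_id", "order_date", "amount"]) \
         .createOrReplaceTempView("orders")

    spark.sql("""
        SELECT customer_id, order_date, amount,
               RANK() OVER (PARTITION BY customer_id ORDER BY amount DESC) AS amount_rank
        FROM orders
    """).show()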

So Databricks and the Spark community have put a lot of emphasis on usability and performance. That makes a lot of sense.

In part 2 of this interview, Reynold Xin will give us some good information on the differences between stream and Structured Streaming, how to integrate Structured Streaming with Apache Kafka, and some hints about the future of Spark.

Download Syncsort’s latest white paper, “Accessing and Integrating Mainframe Application Data with Hadoop and Spark,” to learn about the architecture and technical capabilities that make Syncsort DMX-h the best solution for accessing the most complex application data from mainframes and integrating that data using Hadoop.

 


Syncsort blog

Twitter’s Gnip chief Chris Moody is joining Foundry Group


Twitter is losing another key executive as Gnip chief executive Chris Moody has announced his departure. He has joined venture capital firm Foundry Group as a partner, a move Moody described as a “once-in-a-lifetime opportunity.” It’s believed that his last day will be at the end of May.

For more than two decades, Moody has been involved in the enterprise either as an executive or consultant. He worked at Oracle, IBM, and Aquent before joining Gnip as chief operating officer in 2011 and then assuming the role of CEO at the big data platform in 2013, leading up to its acquisition by Twitter in 2014. Since then, he’s served as a vice president and general manager of the company’s data and enterprise solutions.

Moody’s relationship with Foundry Group isn’t new, as both he and the firm are from the Boulder, Colorado area, and the firm had been an investor in Gnip. When it came time for Foundry to raise its next fund last September, the partners decided to begin having conversations with Moody.

“We knew Chris was an extraordinary board member as well as an extremely seasoned CEO. We had a great affinity for each other, and he shared our value system. When the five of us sat around talking about Chris, after each conversation we got more excited about having him join us, especially as we learned about his personal view for the next decade of his life,” wrote Brad Feld, managing director for Foundry Group.

The departure of Moody strikes another blow to Twitter’s developer relations, especially among brands using the service’s feed of data. But it’s likely that the company already has a backup in place, although a name was not immediately known. We’ve reached out to Twitter for additional information. His resignation follows others on the developer advocacy and platform side who have recently left, including developer advocacy lead Bear Douglas, senior developer advocate Romain Huet, head of developer relations Jeff Sandquist, and senior director of developer and platform relations Prashant Sridharan.

And let’s not forget the other executives who have also departed since 2016, such as COO Adam Bain, chief technology officer Adam Messinger, vice president of communications Natalie Kerris, vice president of product Josh McFarland, Vine general manager Jason Toff, and vice president of global media Katie Stanton.

Moody’s move to venture capital could be nothing more than a sign that he wanted to become an investor. But what will Twitter do now to maintain its relationship with brands eager to tap into the service’s firehose of data?

Although no specific investment themes were stated, it’s possible that Moody could focus on the enterprise and finding the next big data platform that could make a big impact in the marketplace.


Big Data – VentureBeat

Expert Interview: Doug Cutting, Cloudera Chief Architect and Hadoop Co-Founder, Part Two

In Part 1 of this interview with Cloudera’s Chief Architect, Doug Cutting talked about how he got started in Big Data software, Cloudera’s role in recognizing the importance of Hadoop for businesses, what trends drove Hadoop’s growth, and what broad-based business successes Hadoop is now driving in Big Data.

In Part 2, Doug discusses with Syncsort’s Paige Roberts what he is working on now, the launch of Apache Spot, and how to help organizations stay on track with open source, both on-premise and in the cloud.


Syncsort’s own Paige Roberts sits down one-on-one for a candid discussion with open-source guru and Hadoop creator, Doug Cutting of Cloudera

Paige: What are you working on right now?

Doug: A number of different things. I spend a significant chunk of my time out on the road communicating with folks, trying to spell out this vision of where things are going. Probably close to a third of my time is spent doing that.

I also still do a little bit of development. I try to help out where needed in engineering and bringing people up to speed on things that I still may know better than other people. So, I’ve been doing some of that lately.

Also, I’m formally part of what we call the strategy office. There’s three of us in Cloudera. We’re kind of a skunk works in some ways. We’re trying to solve problems and set a pattern for how Cloudera should be solving problems.

So, one of the things we worked on recently was, how do we help non-profits? What’s Cloudera’s model going to be for how we can assist people that we think deserve access to data tools but probably can’t afford to pay us? So I worked with Thorn, who was a winner here last year, and came up with a pattern that I think we can repeat again and again with other non-profits for how we can provide them with assistance.

Paige: That’s awesome!

Doug: That was kind of a fun project. Another one you’re going to hear more about tomorrow [at Strata] is cyber security.

Paige: I heard about that. Apache Spot?

Doug: Yes, we’re launching this as Apache Spot. It’s a new project. And the exciting part of that for me is we’re trying to develop some open data models. So, for cyber and for network data and data about users. So, you’ve got this sort of software stack, which everybody shares. But then, what happens is that each application ends up having its own schemas for the data. So they can have trouble sharing anything above the software layer. We can develop some common schemas for different industries and for different verticals, starting with cyber.

Paige: Some standards.

Doug: Some standards. Say, these are the formats. Say, if you’re going to put it in HBase, this is the way you ought to do it, and here are some tools to help you do that. So we can actually have some open source projects which implement these standards.

It’s not just a document. There’s actually some code that helps you glue things together. Then we can get a lot of different vendors with different kinds of applications. They can share the data, so you don’t have to have multiple copies of your network data for different purposes. So, I think we need to do this together. I think that, similarly, we need this in healthcare. We ought to have standard formats within the ecosystem. I mean there are some.
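As a purely illustrative sketch, and not Apache Spot’s actual open data model, a shared network-flow record might pin down field names and types so that every vendor writes and reads the same shape of data:

    # Illustrative sketch of a shared network-flow record. The field names are
    # hypothetical and are not Apache Spot's actual schema.
    from dataclasses import dataclass, asdict

    @dataclass
    class NetflowRecord:
        timestamp: str     # ISO-8601 event time
        src_ip: str
        dst_ip: str
        src_port: int
        dst_port: int
        protocol: str      # e.g. "tcp" or "udp"
        bytes_sent: int
        packets: int

    record = NetflowRecord("2016-09-27T14:03:00Z", "10.0.0.5", "192.0.2.10",
                           52344, 443, "tcp", 8192, 12)
    print(asdict(record))  # a dict any downstream tool could serialize the same way

With an agreed record shape like this, the analytics, the storage layout, and the vendor applications can all share one copy of the data instead of each keeping its own.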

Paige: Yeah, like EDI. It’s a standard, but there’s like a million different ways to implement it.

Doug: But they’re also not formats that are friendly to the Hadoop ecosystem. So, how do we translate these into the Hadoop ecosystem, genetic data and so on? And there are some efforts in some of these areas already. But I think that’s a neat area for Cloudera to work on.

We don’t want to go into actually building vertical applications and vertical solutions. We want a platform. But we also need to help enable the platform to be effective in different verticals. So we’re starting to look at data formats that are specific to industries. I think it’s a good direction for us. That’s the part of the cyber thing that I particularly am interested in.


Paige: Can you tell me more about Apache Spot?

Doug: We’ve been working with Intel for a couple years on this project – ONI [Open Network Insight] is what they were calling it. And this is just taking that into Apache. It’s been open source actually all along. Intel had it on GitHub. It has some data formats in it, but its primary focus has been on some analytics to help you identify threats, and that’s great stuff.

We want to keep working with Intel on developing that further. But we also want to really focus on getting a broader set of relevant schemas and data formats for data that can be used for other kinds of analytics in cyber and other kinds of predictions. Emphasizing that side of it more is what we’re hoping to do with Spot going forward.

The first thing we want to do is bring it to Apache so we have it some place that’s easy for lots of people to get involved in and collaborate. If it’s going to be a standard then Apache’s the right place to have it.

Paige: That makes sense. Yeah. I talked to a friend of mine, Ryan Merriman. He works as one of the architects on Apache Metron. I was wondering what’s the relationship between the two? How are they different? Is there a relationship?

Doug: Not really. Metron came out a while ago. I think they’re similar in a lot of ways. They came out almost the same time – almost the same month. They’re sort of two parallel efforts.

We’ve been collaborating with Intel from the beginning on ONI, so that’s the one that we’re comfortable with. We have a set of six or eight partners that we’ve been working with who are using ONI already – building solutions for our joint customers.

Paige: Ah, you already have it in production, and it’s working.

Doug: It’s one of these cases where it’s unfortunate that there are two convenient things. On the other hand, it’s you know –

Paige: It’s open source. It happens.

Doug: And it could turn out to be a good thing in terms of the evolutionary context. You want to have some competition.

Paige: Well, is there anything exciting coming down the road that you want to talk about?

Doug: There are lots of exciting things coming down the pipeline that I have no idea about, I’m sure.

Paige: [laughing] Okay.

Doug: The other thing that Cloudera has been working a lot on – and you’ll be hearing a lot about this week at Strata – is: we think that people are really starting to move to the public cloud in a big way.

There’s a lot of people staying on their on-premises data centers, but more and more we’re seeing people move to public cloud. We’re trying to see how we can make this open source Big Data ecosystem really work well in the cloud, too. And make that a first-class citizen, and figure out what we need to do to make it a very natural place. And make it easy for people to go back and forth. We’ve got a lot of announcements around that – making that really seamless.

So that’s an exciting thing, and I think it’s important. People love open source because they’re free from a lot of vendor lock-in. That’s where a lot of the attraction is. That’s one of the big reasons why people use open source. Yet, when they go use a cloud vendor like Amazon, they are immediately using the proprietary services that Amazon provides, and that no one else has, and they’re totally locked in to Amazon. They’re sort of destroying all that…

Paige: Open source goodness.

Doug: Yeah. They’re using open source software on Amazon. They’re not using it exclusively though. They’re also using a lot of Amazon’s services which locks them in. So, we’re trying to help people stay on an open source stack for these high-level services, and be able to run them on Amazon and Azure and Google Cloud and also on-premises. So that’s it.

Paige: Well, that’s great. Thank you so much.

Doug: My pleasure.

To hear how organizations can stay on track with their digital transformation, join Cloudera, Dell EMC and Syncsort industry experts on Thursday, December 8, for the webinar, “The Path to Digital Transformation,” to discuss why bigger data equates to bigger opportunities. They will address how best to begin a big data journey by taking control of all data, controlling costs, and identifying the first use case, so organizations can move forward with confidence to transform their business.


Syncsort blog

Expert Interview: Doug Cutting, Cloudera Chief Architect and Hadoop Co-Founder: Part One

Every year, around Strata + Hadoop World, Cloudera hosts the Data Impact Awards, an award ceremony to congratulate customers for their most impressive or impactful implementations. Partners are invited to nominate customers, and industry analysts and experts judge the nominees. Everyone gets together to hear the stories of how Hadoop has changed businesses and lives and applaud the efforts of dedicated developers. During the celebration, Syncsort’s Paige Roberts dragged Cloudera’s Doug Cutting off to a quiet spot to chat.


Syncsort’s own Paige Roberts sits down one-on-one for a candid discussion with open-source guru and Hadoop creator, Doug Cutting of Cloudera.

Paige: You’re fairly famous now, but how did you get started in all this?

Doug: I spent a number of years in the software business in Silicon Valley. First, I was working in research at Xerox PARC, then at Apple, and a company called Excite in the 90’s. So I was always building search engines. I had worked on search technologies for a long time. I was an experienced software developer who was also working on problems that involved lots of data, trying to build scalable solutions that weren’t amenable to using a relational database. So that was my technical background.

In the late 90’s I wrote a search engine on my own time, in Java, called Lucene. Then, in 2000, I released it as open source. At that point, I learned the ability of open source to make a technology into a standard. Lucene really took off. It was good technology but also it was this delivery method of open source, this building a community at Apache, that really was predominantly responsible for its…

Paige: Success?

Doug: … dominating success.

Paige: Yeah.

Doug: A couple years later, trying to build a distributed version of Lucene that could crawl the internet, we came across Google’s papers about MapReduce and distributed file systems. We realized that these were the right techniques, and no open source version of them existed.

This would be a useful technology: great potential for another open source project, a general utility that a lot of people could share if it were available as open source. So, Mike Cafarella and I got together and worked on that for a few years. By 2005, we had something up and running. We managed to rope Yahoo into devoting a big team to getting it to be scalable and really fulfill this promise of delivering an open source solution for scalable computing which we called Hadoop.
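The core idea from those papers is small enough to sketch: a map step emits key-value pairs, a shuffle groups the values by key, and a reduce step aggregates each group, which a framework like Hadoop then scales across many machines. Below is a toy, single-process illustration of the classic word count; it shows the pattern, not Hadoop’s actual API:

    # Toy single-process illustration of the MapReduce pattern (word count).
    # Real Hadoop distributes the map and reduce phases across a cluster.
    from collections import defaultdict

    def map_phase(line):
        # Emit a (word, 1) pair for every word in a line of input.
        return [(word.lower(), 1) for word in line.split()]

    def reduce_phase(word, counts):
        # Aggregate all the values emitted for a single key.
        return word, sum(counts)

    lines = ["the quick brown fox", "the lazy dog", "the fox"]

    # Shuffle: group every emitted value by its key.
    grouped = defaultdict(list)
    for line in lines:
        for word, count in map_phase(line):
            grouped[word].append(count)

    results = [reduce_phase(word, counts) for word, counts in grouped.items()]
    print(sorted(results))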

Paige: I just interviewed Owen O’Malley recently.

Doug: Right. He was a key part of that team that took what Mike and I had written, and got it to the point where it could really be used by anyone. The reason I was the guy who was able to get this is, I had a combination of technical experience with building scalable systems that weren’t relational databases, as well as experience with open source. I recognized that the combination would be really useful for a lot of things.

I didn’t realize just how useful it would be. That took the founders at Cloudera. I was not a founder. They really thought this would be useful to lots of other companies, like those we’re seeing here today.

Paige: At the awards?

Doug: At the awards. In banking, in transportation, and agriculture – all these crazy sectors are now finding these technologies useful. The founders of Cloudera were the first people to realize that there was a great opportunity for that. They started the company in I guess it was ’08, and I joined in ’09. It’s really been phenomenal to see this growth.

Paige: It really would have been hard to predict that explosion of growth. It’s not something you could have seen coming.

Doug: In retrospect, I think that what is now called the digital transformation that almost every industry is going through, was predictable. There were people predicting that.

As you know, Moore’s Law gives us cheaper and cheaper hardware. People are using it in more and more places, and it’s a byproduct to get data. That data can improve your business because you can use it to understand what you’re doing, and then you can optimize how you are doing things and improve the quality. It’s a really great, great thing to have. And I think you could have seen that, that data was going to become such a key asset to businesses across industries.

Put that together with these Big Data tools. The existing enterprise software universe wasn’t going to satisfy those needs, for a variety of reasons. For one reason, the hardware and software were way too expensive and too specialized for specific tasks, which weren’t the tasks that people needed to solve. People needed lower-cost solutions. They needed things that were more scalable and more general purpose. So it was really the right time for these technologies.

I think all the evidence was there. I simply didn’t put it together. But I think someone could have.

Paige: Well now you have some perspective on the trends from inside Cloudera and from your history. Where do you think it’s going from here?

Doug: I think we’re really seeing that most of the growth in industry is coming from these technologies. So I think we’re still in the early stages of industries becoming data intensive. I think we’re seeing this really driving growth and improvement and optimization and – what’s the word the economists use? Productivity.

We’re seeing some real advances in productivity. In some ways, these improvements in productivity were predicted a long time ago. Then some people were disappointed and said, “Oh, the paperless office is only slightly more productive.” I think people didn’t realize all the places that technology currently is used and touches. And we’re still only learning that. So that’s predominantly what we’re going to see.

And I think this open source ecosystem is really the appropriate way to build the technology. We don’t know what exact tools people are going to need. We need people to experiment – people at universities and in companies – to try building something that they think they need and see if other people need it. Then we’ve got this ecosystem that we can evolve the right tools. If several institutions find it useful, then it, you know…

Paige: Takes off?

Doug: Takes off and becomes a standard. We’re certainly seeing this again and again – rapid evolution for improving software that matches the needs of industries. And that’s pretty cool. [laughter]

Paige: Yes, it is!

Doug: So, as for predicting where that’s going to lead, what industries are going to be huge or what new technologies are going to drive them, I don’t think anybody can predict that. I think maybe with hindsight you can say, “Oh yeah, this should have been obvious,” like people are doing now about this.

Paige: [Laughter] Yup, crystal balls that look backwards are really clear.

Doug: But, I do think those are the trends that are going to be driving things: this generation of data at scale, and then the use of it to improve productivity.


Syncsort blog

Cows and data centers: What HP’s chief engineer thinks about fusing the real world with technology

Chandrakant Patel has a deep history working on hardware and fundamental science at Hewlett-Packard, and he has used that background to create a vision for the future of technology that combines the physical and digital worlds.

He hopes to inspire his fellow HP colleagues and the rest of the tech world on a new decades-long path. Patel is a senior fellow and the chief engineer at Hewlett-Packard. That’s an important and rare position, as HP has more than 50,000 employees in 170 countries, with many thousands of engineers. I met Patel at the 50th anniversary of HP Labs in Palo Alto, and we caught up for an interview after that event at a very special place for HP employees: the original garage at a home on Addison Avenue in Palo Alto, Calif., where HP was born in 1939.

Patel’s job is to inspire HP’s engineers to be creative when thinking about the big technology problems they must overcome. After all, Moore’s Law — or doubling the number of transistors on a chip every couple of years — doesn’t just happen. It is the result of a lot of smart people figuring out the toughest technical problems of the day. Patel believes that we still have to figure out a much more energy efficient world network, with intelligent devices at the edge that don’t drain resources out of the data centers.

We talked about why he chose to stay with the PC and printer maker, HP, rather than HP Enterprise, the services company, after last year’s split-up. And we reminisced about HP’s past, including the creation of its first computer 50 years ago this week. Patel is very passionate about how students should study the fundamentals of science — and both hardware and software — to prepare themselves for the age of the Internet of Things. He prefers to call this the “cyber physical” applications, which expose the seams between hardware and software, between the real world and the digital.

We chatted there so that we could get inspired about the history of technology and where it’s going in the future. Here’s an edited transcript of our conversation. I’ve also added many of Patel’s slides, as he loves to paint his ideas of the future by making sketches.


Above: The future is cyber physical.

Image Credit: HP

Chandrakant Patel: I’m a mechanical engineer. I started at an interesting time in Silicon Valley. My first interview was with a company called Dysan. It was on Patrick Henry Drive. Patrick Henry was brand new. Now the stadium is very close to it. They were making disks. Heads and media were done here. I got a job at Memorex, where Nvidia is now located.

A long time ago, Memorex had that commercial – “Is it live or is it Memorex?” Ella Fitzgerald would shatter a glass with the frequency of her voice. They’d copy it to a Memorex tape, and then playing back the tape would shatter the glass too.

The reason it’s important to me is it was a prime time commercial. People understood why the glass broke. People understood physical fundamentals, back in the early ‘80s. I found myself in what I called the “valley of tinkerers.” Memorex had its share. Al Shugart, Finis Conner. They went on to create Seagate. We had manufacturing and design there.

I was making drives where the mass was 100 kilograms. A gigabyte would cost $100,000, and it was the size of a washing machine. Because the mass was very high and the stiffness was low, the characteristic frequency of the drive was low. Low-frequency vibrations could damage it. As mechanical engineers we had interesting problems to solve.

VentureBeat: It was an age of physical hardware.

Patel: Very much so. Understanding how physical hardware worked. Discs were rotating at 3600 RPM, 32 heads, how do you keep them flying? Then one thing I noticed was, as the hardware got smaller, the drives got smaller, I felt the stiffness was going up. The mass went down. The ratio of stiffness to mass goes up, and the natural frequency goes up. They’re less susceptible to those low-frequency vibrations. It was simple first-order fundamentals-based thinking to see that drives would be commoditized.
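That reasoning follows from the textbook relation for a spring-mass system, where the natural frequency grows with the square root of stiffness over mass:

    \omega_n = \sqrt{\frac{k}{m}}, \qquad f_n = \frac{1}{2\pi}\sqrt{\frac{k}{m}}

For example, if stiffness doubles while mass falls to a tenth, the natural frequency rises by a factor of sqrt(2/0.1), roughly 4.5, moving a smaller drive well away from damaging low-frequency vibration.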

I reset myself, after working on large drives and small drives. I joined HP Labs in 1991 to work on the PA-RISC chip. We were going from wire bonds to the flip chip to get a lot more I/Os out of the chip. I established the chip packaging and thermal management work. I did a lot of work on electronics cooling. That’s when I got to know Bill. Subsequently I felt that chips would come from one or two places. As you scale down you need volume.


Above: Chandrakant Patel is an HP senior fellow and chief engineer. He is standing at HP’s original garage headquarters.

Image Credit: Dean Takahashi

I moved out into systems, working on large-scale systems like Superdome, the supercomputer-class systems we were building. In the mid-’90s I went to my boss and said, “The data center is the computer. The building is the computer.” I filled a room with racks that I said would be about 10 kilowatts, filled with industry-standard components. Now the building is the value add, not the servers: the networking, the cooling, the power. Power, ping, and pipe. Those three Ps would determine the data center, and the total cost of ownership of a data center is driven by energy.

My boss said, “Why do you want to work on facilities?” My contention was, it’s Carnegie Hall with 150 people per seat. A person is 100 watts. A rack would be 15 kilowatts. That’s 150 people in a seat. Imagine that. You have to deal with fluid flow and so on. We created the smart data center project. We built a data center with sensors and controls. We built the dynamic control systems for it. We were the first ones to do that.

We went on to build eco parts. That started because of a conversation with a customer of ours. The customer had underground mines. My recommendation was to put data centers in containers and lower them into the ground. The region where they were, the ground was nine degrees Celsius. I said, “Let’s dump heat into the ground.” That didn’t happen. I wish it had, but the dot-com boom and bust happened at that moment. Otherwise that would have been one of the most secure places in the world.


Big Data – VentureBeat

Chief Procurement Officers Continue To De-Emphasize Savings

Some moments are so instantly, indelibly etched into pop culture that they shape the way we think for years to come. For virtual reality (VR), that moment may have been the scene in the 1999 blockbuster The Matrix when the Keanu Reeves character Neo learns that his entire life has been a computer-generated simulation so fully realized that he could have lived it out never knowing that he was actually an inert body in an isolation tank. Ever since, that has set the benchmark for VR: as a digital experience that seems completely, convincingly real.

Today, no one is going to be unaware, Matrix-like, that they’re wearing an Oculus Rift or a Google Cardboard headset, but the virtual worlds already available to us are catching up to what we’ve imagined they could be at a startling rate. It’s been hard to miss all the Pokémon Go players bumping into one another on the street as they chased animated characters rendered in augmented reality (AR), which overlays and even blends digital artifacts seamlessly with the actual environment around us.

For all the justifiable hype about the exploding consumer market for VR and, to a lesser extent, AR, there’s surprisingly little discussion of their latent business value—and that’s a blind spot that companies and CIOs can’t afford to have. It hasn’t been that long since consumer demand for the iPhone and iPad forced companies, grumbling all the way, into finding business cases for them.

If digitally enhanced reality generates even half as much consumer enthusiasm as smartphones and tablets, you can expect to see a new wave of consumerization of IT as employees who have embraced VR and AR at home insist on bringing it to the workplace. This wave of consumerization could have an even greater impact than the last one. Rather than risk being blindsided for a second time, organizations would be well advised to take a proactive approach and be ready with potential business uses for VR and AR technologies by the time they invade the enterprise.

They don’t have much time to get started.

The two technologies are already making inroads in fields as diverse as medicine, warehouse operations, and retail. And make no mistake: the possibilities are breathtaking. VR can bring human eyes to locations that are difficult, dangerous, or physically impossible for the human body, while AR can deliver vast amounts of contextual information and guidance at the precise time and place they’re needed.

As consumer adoption and acceptance drives down costs, enterprise use cases for VR and AR will blossom. In fact, these technologies could potentially revolutionize the way companies communicate, manage employees, and digitize and automate operations. Yet revolution is rarely bloodless. The impact will probably alter many aspects of the workplace that we currently take for granted, and we need to think through the implications of those changes.

VR and AR are related, but they’re not so much siblings as cousins. VR is immersive. It creates a fully realized digital environment that users experience through goggles or screens (and sometimes additional equipment that provides physical feedback) that make them feel like they’re surrounded by and interacting entirely within this created world.

AR, by contrast, is additive. It displays text or images in glasses, on a window or windshield, or inside a mirror, but the user is still aware of and interacting with reality. There is also an emerging hybrid called “mixed reality,” which is essentially AR with VR-quality digital elements, that superimposes holographic images on reality so convincingly that trying to touch them is the only way to be sure they aren’t actually there.

Although VR is a hot topic, especially in the consumer gaming world, AR has far more enterprise use cases, and several enterprise apps are already in production. In fact, industry analyst Digi-Capital forecasts that while VR companies will generate US$30 billion in revenue by 2020, AR companies will generate $120 billion, or four times as much.

Both numbers are enormous, especially given how new the VR/AR market is. As recently as 2014, it barely existed, and almost nothing available was appropriate for enterprise users. What’s more, the market is evolving so quickly that standards and industry leaders have yet to emerge. There’s no guarantee that early market entrants like Facebook’s Oculus Rift, Samsung’s Gear VR, and HTC’s Vive will continue to exist, never mind set enduring benchmarks.

Nonetheless, it’s already clear that these technologies will have a major impact on both internal and customer-facing business. They will make customer service more accurate, personalized, and relevant. They will reduce human risk and enhance public safety. They will streamline operations and smash physical boundaries. And that’s just the beginning.

Cleveland Clinic: Healing from the Next Room

Medicine is already testing the limits of learning with VR and AR.

The most potentially disruptive operational use of VR and AR could be in education and training. With VR, students can be immersed in any environment, from medieval architecture to molecular biology, in classroom groups or on demand, to better understand what they’re studying. And no industry is pursuing this with more enthusiasm than medicine. Even though Google Glass hasn’t been widely adopted elsewhere, for example, it’s been a big success story in the medical world.

Pamela Davis, MD, senior vice president for medical affairs at Case Western Reserve University in Cleveland, Ohio, is one of the leading proponents of medical education using VR and AR. She’s the dean of the university’s medical school, which is working with Cleveland Clinic to develop the Microsoft HoloLens “mixed reality” device for medical education and training, turning MRIs and other conventional 2D medical images into 3D images that can be projected at the site of a procedure for training and guidance during surgery. “As you push a catheter into the heart or place a deep brain stimulation electrode, you can see where you want to be and guide your actions by watching the hologram,” Davis explains.

The HoloLens can also be programmed as a “lead” device that transmits those images and live video to other “learner” devices, allowing the person wearing the lead device to provide oversight and input. This will enable a single doctor to demonstrate a delicate procedure up-close to multiple students at once, or do patient examinations remotely in an emergency or epidemic.

Davis herself was convinced of the technology’s broader potential during a demonstration in which she put on a learner HoloLens and rewired a light switch, something decidedly outside her expertise, under the guidance of an engineer wearing a lead HoloLens in the next room. In the near future, she predicts, it will help people perform surgery and other sensitive, detailed tasks not just from the next room, but from the next state or country.

Consumers are already getting used to thinking of VR and AR in the context of entertainment. Companies interested in the technologies should be thinking about how they might engage consumers as part of the buying experience.

Because the technologies deliver more information and a better shopping experience with less effort, e-commerce is going to give rise to v-commerce, where people research, interact with, and share products in VR and AR before they order them online or go to a store to make a purchase.

Online eyewear retailers already allow people to “try on” glasses virtually and share the images with friends to get their feedback, but that’s rudimentary compared to what’s emerging.

Mirrors as Personal Shoppers

Clothing stores from high-end boutiques to low-end fashion chains are experimenting with AR mirrors that take the shopper’s measurements and recommend outfits, showing what items look like without requiring the customer to undress.

Instant Designer Shows

Luxury design house Dior uses Oculus Rift VR goggles to let its well-heeled customers experience a runway show without flying to Paris.

Custom Shopping Malls

British designer Allison Crank has created an experimental VR shopping mall. As people walk through it, they encounter virtual people (and the occasional zoo animal) and shop in stores stocked only with items that users are most likely to buy, based on past purchase information and demographic data.

A New Perspective

IKEA’s AR application lets shoppers envisage a piece of furniture in the room they plan to use it in. They can look at products from the point of view of a specific height—useful for especially tall or short customers looking for comfortable furniture or for parents trying to design rooms that are safe for a toddler or a young child.

Painless Do-it-Yourself Instructions

Instead of forcing customers to puzzle over a diagram or watch an online video, companies will be able to offer customers detailed VR or AR demonstrations that show how to assemble and disassemble products for use, cleaning, and storage.

The customer-facing benefits of VR and AR are inarguably flashy, but it’s in internal business use that these technologies promise to shine brightest: boosting efficiency and productivity, eliminating previously unavoidable risks, and literally giving employers and managers new ways to look at information and operations. The following examples aren’t blue-sky cases; experts say they’re promising, realistic, and just around the corner.

Real-Time Guidance

A combination of AR glasses and audio essentially creates a user-specific, contextually relevant guidance system that confirms that wearers are in the right place, looking at the right thing, and taking the right action. This technology could benefit almost any employee who is not working at a desk: walking field service reps through repair procedures, guiding miners to the best escape route in an emergency, or optimizing home health aides’ driving routes and giving them up-to-date instructions and health data when they arrive at each patient’s home.

Linking to the Hidden

AR technology will be able to display any type of information the wearer needs to know. Linked to facial identification software, it could help police officers identify suspects or missing persons in real time. Used to visualize thermal gradients, chemical signatures, radioactivity, and other things that are invisible to the naked eye, it could help researchers refine their experiments or let insurance claims assessors spot arson. Similarly, VR will allow users to create and manipulate detailed three-dimensional models of everything from molecules to large machinery so that they can examine, explore, and change them.

Reducing the Human Risk

VR will allow users to perform high-risk jobs while reducing their need to be in harm’s way. The users will be able to operate equipment remotely while seeing exactly what they would if they were there, a use case that is ideal for industries like mining, firefighting, search and rescue, and toxic site cleanup. While VR won’t necessarily eliminate the need for humans to perform these high-risk jobs, it will improve their safety, and it will allow companies to pursue new opportunities in situations that remain too dangerous for humans.

Reducing the Commercial Risk

VR can also reduce an entirely different type of operational risk: that of introducing new products and services. Manufacturers can let designers or even customers “test” a product, gather their feedback, and tweak the design accordingly before the product ever goes into production. Indeed, auto manufacturer Ford has already created a VR Immersion Lab for its engineers, which, among other things, helped them redesign the interior of the 2015 Ford Mustang to make the dashboard and windshield wipers more user-friendly, according to Fortune. In addition to improving customer experience, this application of VR is likely to accelerate product development and shorten time to market.

Similarly, retailers can use VR to create and test branch or franchise location designs on the fly to optimize traffic flow, product display, the accessibility of products, and even decor. Instead of building models or concept stores, a designer will be able to create the store design with VR, do a virtual walkthrough with executives, and adjust it in real time until it achieves the desired effect.

Seeing in Tongues

At some point, we will see an AR app that can translate written language in near-real time, which will dramatically streamline global business communications. Mobile apps already exist to do this in certain languages, so it’s just a matter of time before we can slip on glasses that let us read menus, signs, agendas, and documents in our native tongue.

Decide with the Eye

More dramatically, AR project management software will be able to deliver real-time data at a literal glance. On a construction site, for example, simply scanning the area could trigger data about real-time costs, supply inventories, planned versus actual spending, employee and equipment scheduling, and more. By linking to construction workers’ own AR glasses that provide information about what to know and do at any given location and time, managers could also evaluate and adjust workloads.

Squeeze Distance

Farther in the future, VR and AR will create true telepresence, enhancing collaboration and potentially replacing in-person meetings. Users could transmit AR holograms of themselves to someone else’s office, allowing them to be seen as if they were in the room. We could have VR workspaces with high-fidelity avatars that transmit characteristic facial expressions and gestures. Companies could show off a virtual product in a virtual room with virtual coworkers, on demand.

Reduce Carbon Footprint

If nothing else, true telepresence could practically eliminate business travel costs. More critically, though, in an era of rising temperatures and shrinking resources, the ability to create and view virtual people and objects rather than manufacturing and transporting physical artifacts also conserves materials and reduces the use of fossil fuel.

The strength of digitally enhanced reality—and AR in particular—is its ability to determine a user’s context and deliver relevant information accordingly. This makes it valuable for monitoring and managing employee behavior and performance. Employees could, for example, use the location and time data recorded by AR glasses to prove that they were (or weren’t) in a particular place at a particular time. The same glasses could provide them with heads-up guided navigation, alert employers that they’re due for a legally mandated break, verify that they completed an assigned task, and confirm hours worked without requiring them to fill out a timesheet.
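That last point is easy to picture: if the glasses simply log timestamped location fixes, presence and hours worked fall out of a few lines of arithmetic. The log format, site names, and thresholds below are invented for illustration:

    # Sketch: derive hours worked and verify presence from an AR headset's
    # timestamped location log. The log format and site names are illustrative.
    from datetime import datetime

    log = [
        ("2016-09-12 07:58", "warehouse-east"),
        ("2016-09-12 12:01", "warehouse-east"),
        ("2016-09-12 16:32", "warehouse-east"),
    ]

    def was_present(log, site, when):
        # True if any fix at the site falls within 15 minutes of the given time.
        when = datetime.strptime(when, "%Y-%m-%d %H:%M")
        return any(loc == site and
                   abs((datetime.strptime(ts, "%Y-%m-%d %H:%M") - when).total_seconds()) <= 900
                   for ts, loc in log)

    def hours_on_site(log, site):
        # Hours between the first and last fix recorded at the site that day.
        times = [datetime.strptime(ts, "%Y-%m-%d %H:%M") for ts, loc in log if loc == site]
        return (max(times) - min(times)).total_seconds() / 3600 if times else 0.0

    print(was_present(log, "warehouse-east", "2016-09-12 08:05"))  # True
    print(round(hours_on_site(log, "warehouse-east"), 2))          # 8.57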

However, even as these capabilities improve data governance and help manage productivity, they also raise critical issues of privacy and autonomy (see The Norms of Virtual Behavior). If you’re an employee using VR or AR technology, and if your company is leveraging it to monitor your performance, who owns that information? Who’s allowed to use it, and for what purposes? These are still open legal questions for these technologies.

Another unsettled—and unsettling—question is how far employers can use these technologies to direct employees’ work. While employers have the right to tell employees how to do their jobs, autonomy is a key component of workplace satisfaction. The extent to which employees are required to let a pair of AR glasses govern their actions could have a direct impact on hiring and retention.

Finally, these technologies could be one more step toward greater automation. A warehouse-picking AR application that guides pickers to the appropriate product faster makes them more productive and saves them from having to memorize hundreds or even thousands of SKUs. But the same technology that can guide a person will also be able to guide a semiautonomous robot.

The Norms of Virtual Behavior

VR and AR could disrupt our social norms and take identity hacking to a new level.

The future of AR and VR isn’t without its hazards. We’ve all witnessed how distracting and even dangerous smartphones can be, but at least people have to pull a phone out of a pocket before getting lost in the screen. What happens when the distraction is sitting on their faces?

This technology is going to affect how we interact, both in the workplace and out of it. The annoyance verging on rage that met the first people wearing Google Glass devices in public proves that we're going to need to evolve new social norms. We'll need ways to signal how engaged we are with what's in front of us while we're wearing AR glasses, what we're doing with the glasses as we interact, and whether we're paying attention at all.

More sinister possibilities will present themselves down the line. How do you protect sensitive data from being accessed by unauthorized or “shadow” VR/AR devices? How do you prove you’re the one operating your avatar in a virtual meeting? How do you know that the person across from you is who they say they are and not a competitor or industrial spy who’s stolen a trusted avatar? How do you keep someone from hacking your VR or AR equipment to send you faulty data, flood your field of vision with disturbing images, or even direct you into physical danger?

As the technology gets more sophisticated, VR and AR vendors will have to start addressing these issues.

To realize the full business value of VR and AR, companies will need to tackle certain technical challenges. More precisely, they'll have to wait for vendors to take them on, because the market is still so new that standards and practices are far from mature.

For one thing, successful implementation requires devices (smartphones, tablets, and glasses, for now) that are capable of delivering, augmenting, and overlaying information in a meaningful way. Only in the last year or so has the available hardware progressed beyond problems like overheating under heavy use, too-small screens, low-resolution cameras, insufficient memory, and underpowered batteries. And while the hardware is improving, so many vendors have emerged that companies have a hard time choosing among the options.

The proliferation of devices has also increased software complexity. For enterprise VR and AR to take off, vendors need to create software that can run on the widest possible range of devices with minimal modification. Otherwise, companies end up choosing software based on what it can do on their hardware of choice rather than on what the business actually needs.

The lack of standards only adds to the confusion. Porting data to VR or AR systems is different from mobilizing front-end or even back-end systems, because it requires users to enter, display, and interact with data in new ways. For devices like AR glasses that lack a keyboard or touch screen, vendors must determine how users will enter data (voice recognition? eye tracking? image recognition?), how to display it legibly in any given environment, and whether to develop their own user interface tools or work with a third party.
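One plausible way to manage that choice is to hide the modality behind a single input interface, so the application logic never cares whether a command arrived by voice, gaze, or image recognition. The sketch below uses hypothetical class names to show the idea:

    # Sketch of abstracting AR input modalities behind one interface so the
    # application logic is independent of how the user entered a command.
    # All class and method names are hypothetical.

    class InputSource:
        def next_command(self):
            raise NotImplementedError

    class VoiceInput(InputSource):
        def __init__(self, phrases):
            self.phrases = list(phrases)
        def next_command(self):
            # A real implementation would call a speech-recognition engine.
            return self.phrases.pop(0) if self.phrases else None

    class GazeInput(InputSource):
        def __init__(self, fixations):
            self.fixations = list(fixations)
        def next_command(self):
            # A real implementation would map eye-tracking fixations to UI targets.
            return self.fixations.pop(0) if self.fixations else None

    def run(sources):
        # The application consumes commands without knowing their modality.
        for source in sources:
            command = source.next_command()
            if command:
                print("handling:", command)

    run([VoiceInput(["show inventory"]), GazeInput(["select item 42"])])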

Finally, delivering convincing digital enhancements to reality demands such vast amounts of data that many networks simply can't accommodate it. Much as videoconferencing didn't truly take off until high-speed broadband became widely available, VR and AR adoption will lag until a zero-latency infrastructure exists to support them.
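A rough back-of-envelope calculation shows why. Using illustrative assumptions (not the specs of any particular headset), an uncompressed stereo stream quickly reaches data rates that ordinary corporate networks can't sustain:

    # Back-of-envelope data rate for uncompressed stereo VR video.
    # All parameters are illustrative assumptions, not product specs;
    # real systems compress aggressively, but the raw numbers set the scale.
    width, height = 2160, 1200      # pixels per eye (illustrative)
    eyes = 2
    fps = 90                        # frames per second
    bits_per_pixel = 24             # 8-bit RGB

    bits_per_second = width * height * eyes * fps * bits_per_pixel
    print(round(bits_per_second / 1e9, 1), "Gbit/s uncompressed")  # ~11.2 Gbit/s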

For all that VR and AR solutions have improved dramatically in a short time, they’re still primarily supplemental to existing systems, and not just because the software is still evolving. Wearables still have such limited processing power, memory, and battery life that they can handle only a small amount of information. That said, hardware is catching up quickly (see The Supporting Cast).

The Supporting Cast

VR and AR would still be science fiction if it weren’t for these supporting technologies.

The latest developments in VR and AR technologies wouldn’t be possible without other breakthroughs that bring things once considered science fiction squarely into the realm of science fact:

  • Advanced semiconductor designs pack more processing power into less space.
  • Microdisplays fit more information onto smaller screens.
  • New power storage technologies extend battery life while shrinking battery size.
  • Development tools for low-latency, high-resolution image rendering and improved 3D-graphics displays make digital artifacts more realistic and detailed.
  • Omnidirectional cameras that can record in 360 degrees simultaneously create fully immersive environments.
  • Plummeting prices for accelerometers lower the cost of VR devices.

Companies in the emerging VR/AR industry are encouraging the makers of smartglasses and safety glasses to work together to create ergonomic smartglasses that deliver information in a nondistracting way and that are also comfortable to wear for an eight-hour shift.

The argument in favor of VR and AR for business is so powerful that once vendors solve the obvious hardware problems, experts predict that existing enterprise mobile apps will quickly start to include VR or AR components, while new apps will emerge to satisfy as yet unmet needs.

In other words, it's time to start thinking about how your company might put these technologies to use—and how to do so in a way that minimizes concerns about data privacy, corporate security, and employee comfort. Digitally enhanced reality is coming tomorrow, so business needs to start planning for it today. D!

Read more thought-provoking articles in the latest issue of the Digitalist Magazine, Executive Quarterly.
