Category Archives: Big Data

3 Ways to Prevent a Data Breach from Becoming an Ordeal

It’s easy to think of a data breach as a one-time event: the affected company is at risk for a workday, with residual headaches for maybe a week. But when IT systems aren’t regularly audited for security and layered stopgaps aren’t in place to mitigate the damage, even major multinational companies like Equifax can remain vulnerable for months. How can you make sure you’re not caught asleep at the wheel when your data security is put to the test?


1. Audit Early, Audit Often

According to a study by Syncsort, nearly two-thirds of companies perform security audits on their systems. Dig deeper, though, and the most common schedule among those that do audit was annual (39%), with another 10% auditing every two years or more. Considering how sophisticated cyber-criminals have become and how frequently breaches like the Equifax incident occur, this is unacceptable. An outdated system or plan hands hackers an easy target, and when it can take up to a year for an organization to act on outdated infrastructure, the consequences of that inaction can multiply exponentially.

2. Don’t Stop at One

The most secure physical structures don’t rely on one layer of integrity. Make sure the structural integrity of your less tangible data and technology stays strong with multiple layers of resilience. Your multi-faceted approach should address the vulnerabilities and strengths of the following areas:

  • Port/IP Address
  • Exit Point
  • File Security
  • Field Security
  • Command Control
  • Object Authority

That’s right: the integrity of your data depends on all of these layers. Even one neglected layer can be the open door malicious actors need to capture sensitive information.
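To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical check names derived from the list above) that treats the six areas as independent audit layers, where a single failing layer flags the whole system:

```python
# Hypothetical sketch: treat each area as an audit layer; one failing
# (or unchecked) layer is enough to flag the system as exposed.

SECURITY_LAYERS = [
    "port_ip_address",
    "exit_point",
    "file_security",
    "field_security",
    "command_control",
    "object_authority",
]

def failing_layers(results):
    """Return the layers that failed or were never checked."""
    return [layer for layer in SECURITY_LAYERS
            if not results.get(layer, False)]

# Five of six layers pass; the one neglected layer is still an open door.
checks = {layer: True for layer in SECURITY_LAYERS}
checks["exit_point"] = False
assert failing_layers(checks) == ["exit_point"]
```

Note that an unchecked layer counts as a failure here: in a layered model, “we never audited it” is no safer than “it failed.”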

3. Communication is Key

In the unfortunate event that your organization suffers a security breach, there’s no need to exacerbate the issue by hesitating to inform the public. Any security event will understandably test the public trust, but you could suffer even more PR damage by withholding significant news for any amount of time. Acting fast isn’t just for IT administrators. Executive staff, retained PR agencies, and any other public-facing entities in your organization must stay on the ball to deliver the “Who, What, Why, Where and When” people need to know.

Download our Whitepaper today and discover the causes and effects of data breaches.


Syncsort Blog

To successfully integrate AI, break through the fear barrier


Artificial intelligence has a hype problem. The technology earns attention for its rapid developments and unique applications, but there is still a major gap when it comes to real-world implementations. In fact, Gartner data shows that while nearly half of CIOs plan to use AI, only 4 percent have actually started to implement the technology.

So why the lag? AI faces big barriers to enterprise adoption due to return-on-investment (ROI) concerns and fears among the workforce that the technology is designed to replace their jobs. These challenges result from poor planning around AI implementation and a misguided understanding of what it takes to truly leverage the technology for the benefit of the whole organization.

Redefine AI success

When implementing any type of technology, one of the biggest hurdles businesses face is clearly articulating both long-term and short-term expected ROI. For AI, this challenge can sometimes be exacerbated due to a limited understanding of its capabilities and, more importantly, how insights from the technology translate into existing goals and the bottom line. AI requires initial investments in time and financial resources, and without a firm ROI, it can be challenging to convince the executive team that the implementation will affect the business positively. This is what can lead to stalls or limited implementations of AI.

Instead of implementing AI simply because it is supposed to improve the efficiency and accuracy of business practices, businesses need to take the time to understand why they need the technology in the first place. For example, will implementing an AI solution improve the productivity and success of financial industry traders, or help compliance teams more effectively monitor for criminal activity? Once you’ve established the overarching objective, set up key metrics for meeting it. This means identifying every part of the AI process in the business, from data collection to model application, and measuring each part with the right benchmarks or key performance indicators (KPIs) that map the process back to ROI.
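As a rough illustration of that last point, the sketch below (all stage names, metrics, and thresholds are invented for the example) benchmarks each stage of an AI pipeline against its own KPI target, so a miss can be traced to a specific step rather than to “AI” as a whole:

```python
# Illustrative only: stage names, metrics, and targets are hypothetical.
# Each stage of the AI process gets its own benchmark, so a shortfall
# can be traced to a specific step in the pipeline.

pipeline_kpis = {
    "data_collection":   {"metric": 0.97, "target": 0.95},  # e.g. data completeness
    "model_training":    {"metric": 0.88, "target": 0.90},  # e.g. validation accuracy
    "model_application": {"metric": 0.92, "target": 0.85},  # e.g. analyst adoption
}

def stages_below_target(kpis):
    """Return the stages whose measured KPI misses its benchmark."""
    return [stage for stage, kpi in kpis.items()
            if kpi["metric"] < kpi["target"]]

# Only the stage that misses its benchmark needs attention.
assert stages_below_target(pipeline_kpis) == ["model_training"]
```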

Take away the fear of the unknown

Among the workforce, there is sometimes an assumption that companies design the technology to surpass employees, and eventually replace them. This creates an unnecessary competition between man and machine, with subject matter experts (SMEs) not interested in working with or better understanding a technology that they think is intended to make their role in the company obsolete. The truth is that most companies design AI to enable, assist, and augment workers, not replace them. By completing menial tasks that take up people’s valuable time, the technology creates more opportunities for the workforce to innovate, excel, and evolve.

To effectively display the benefits of AI, business leaders should conduct small proofs of value (PoVs) for a given organizational problem, with a focus on a specific goal and its ROI. Rather than a slow “behind closed doors” roll-out, businesses should allow workers and SMEs to be involved right from the start. This will help them understand the ROI and its impact on the business as well as their everyday lives, and could turn AI skeptics into evangelists.

Ensure data interpretations are accurate, relatable, and evolving

Outside of hesitations regarding ROI or the future of the workforce, one challenge facing those looking to implement AI is ensuring analysts correctly interpret the data to meet the needs of the business. While it is easy to feed in data and run an algorithm to produce results, it is much more difficult to ensure those results actually mean something to the business. This is why it’s important to understand different models of AI, to ensure the types of results the machine produces add new value to existing internal processes, while also making certain that personnel are accurately digesting and applying the new insights.

The beauty of an AI algorithm is that companies can design it to learn continuously. It is therefore incredibly important that AI processes not only translate into valuable insight but are also agile in nature, with a system of checks and balances to ensure the data produced best reflects the needs of the business. This includes understanding how false positives factor into the use of AI and which error type, false positives or false negatives, makes the bigger impact on business efficiency and innovation.

While an AI solution may seem to be a one-size-fits-all product, it’s important to remember that the strategy behind the technology needs to be as unique as the business itself. Understanding the true value and impact of AI on both the company’s workforce culture and its bottom line will begin to close the gap between implementations of the technology and ensure optimized use.

Uday Kamath is the chief analytics officer at Digital Reasoning, an AI company that interprets human intentions and behaviors.


Big Data – VentureBeat

Expert Interview (Part 2): Elise Roy on Human Centered Design and Overcoming Challenges with Big Data

In case you missed Part 1, read here!

Recently, while Elise was working with NPR, they discussed the fact that episodes of NPR programs posted online did not provide captions. While these shows generally have an associated article or a transcript of the conversation, Elise pointed out that NPR might be filtering out a significant portion of the population: people with hearing loss who can still appreciate an audio-centered show, as well as those who are completely deaf but like the pacing captions bring and a less cluttered visual experience.


Because of their conversation, NPR has a better understanding of an entire market they might be missing out on.

Her way of problem-solving is catching on.

“A couple years ago, when I was telling people about human centered design, they had no idea what I was talking about,” Elise says. “But now they’re starting to recognize the value it provides businesses and starting to see how they can create more targeted, responsive solutions.”

Big Data plays an important role in creating more customer-centric solutions. It allows organizations to better understand and react to the human experience, build more personalized and customized experiences, and identify patterns that might otherwise be difficult to see.

Currently, one of the biggest struggles with integrating the perspective of people with disabilities is that disabilities are so varied; it can be challenging to design with each one in mind.

Elise says Big Data can help overcome those challenges.

There are already products on the market that benefit individuals with disabilities that use the power of Big Data and the Internet of Things.

For instance, there are companies developing doorbell home security solutions that alert users to motion and allow them to monitor the door remotely, an ideal solution for individuals with mobility problems. Innovations like this, along with others such as the Roomba and self-driving cars, not only make it easier for people with disabilities to live independently but are also products the general population enjoys.

In order to continue to bring innovations like these to market, it will be essential that Big Data be paired with human centered design methods.

“This is because big data can easily be influenced by bias,” Elise says. “For example, we could only collect certain kinds of data and be missing out on a key thing that would get uncovered through the human centered design process during the observation phase.”

Recently, Microsoft hired several experts in reducing bias in artificial intelligence after recognizing that its AI applications were designed around the beliefs of those who built them rather than the people who would actually use them.

Moving forward, Elise believes there needs to be symbiosis between Big Data and the human aspect of design.

Elise’s consulting business is still in its infancy, but she’s excited about the impact that looking at innovation through the lens of disability can offer businesses.

“There’s a lot of people who have gotten back to me and said it’s really impacted how they’re thinking about things,” Elise says.

We also have a new eBook focused on Strategies for Improving Big Data Quality available for download. Take a look!


Syncsort Blog

Avoiding another cryptocurrency ‘penis’ moment with WatermelonBlock and IBM Watson


It was a watershed moment in the wonderful world of cryptocurrencies, ICOs, and blockchain technology projects. Prodeum — which promised to revolutionize the fruit and vegetable industry — replaced its website, post-ICO, with a white screen that contained just one word.


It is unclear who was behind the scam. While the company looked like a legitimate blockchain startup based in Lithuania, various threads suggest the perpetrator was an individual in Colombia. And while they only got away with $22,000 worth of ETH (more than the $11 claimed in other articles on the subject), other scams have been more fruitful.

Confido managed to walk away with over $374,000 in November 2017.

Today, WatermelonBlock — an AI-powered investment and trading platform for cryptocurrency investors and traders — has announced it is integrating with IBM Watson’s AI computing platform to provide investors with real-time insights and detailed analysis to help identify scams like Prodeum and Confido.

WatermelonBlock takes keywords, hashtags, and metadata terms relating to cryptocurrencies and ICOs from a wide variety of social and traditional media APIs. IBM Watson then measures this data for sentiment. It also weighs each message author individually according to their social influence and reach.

WatermelonBlock then uses its algorithms to compute a percentage and index score for each network, known as the MelonScore.
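The pipeline described above — per-message sentiment, weighted by each author’s influence, rolled up into an index — can be sketched roughly as follows. This is a toy illustration only; the actual MelonScore algorithm is proprietary and its formula is not public:

```python
# Toy illustration only; the real MelonScore algorithm is proprietary.
# Each message carries a sentiment in [-1, 1] and a positive weight
# representing the author's influence and reach.

def index_score(messages):
    """Weighted average sentiment, mapped onto a 0-100 index."""
    total_weight = sum(weight for _, weight in messages)
    weighted = sum(sentiment * weight
                   for sentiment, weight in messages) / total_weight
    return round(50 * (weighted + 1))  # map [-1, 1] onto [0, 100]

# Two enthusiastic low-influence posts vs. one skeptical influencer:
# the influencer's weight drags the index below neutral (50).
msgs = [(0.8, 1.0), (0.9, 1.0), (-0.5, 4.0)]
assert index_score(msgs) < 50
```

The weighting step is the key design choice: without it, a swarm of low-credibility accounts hyping a scam ICO would dominate the score.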

So can this technology help with carefully constructed scams? Prodeum was hard to detect because it looked like a regular ICO, so how does the MelonScore help with those situations?

“WatermelonBlock is designed with retail consumers in mind,” Elliot Rothfield, cofounder and creative director at WatermelonBlock, told me. “This scam is a product of a developing market. During an era of ferment, rapid growth and changing standards make decision making difficult. By combining sentiment analysis — the voice of the people — with weighted influencer sentiment — the voice of the knowledgeable — users can circumvent being entangled in a ‘Penisgate’ controversy.”

In addition to helping investors avoid scams, WatermelonBlock is a useful source of intelligence for the ICO market in general, the majority of which are legitimate projects.

By continually scanning the internet for sentiment data and analyzing both tone and author credibility, the AI-powered market predictions can help investors to spot potential winners too. Whenever sentiment changes in a particular cryptocurrency, the system notifies users in real-time, giving them the opportunity to anticipate market fluctuations and inform appropriate action.

That being said, the MelonScore is not a predictor of future market value.

“The MelonScore is unique in that it will represent the sentiment of the masses with respect to cryptocurrency,” Rothfield said. “AI is used to create a ranking system unique to WatermelonBlock, built on big data sets gathered from social media, blogs, news, microsites and other public forums.”

The use of IBM Watson for AI-powered analysis is just the beginning for WatermelonBlock.

“WatermelonBlock is not just a single application but a suite of AI analysis tools,” Rothfield said. “WatermelonAnalytics will be introduced soon as a small business sentiment analyzer. WatermelonAnalytics will allow businesses to search, analyze and compare individual phrases, hashtags or direct URLs to harness industry-specific insights. Users will be able to create their own private index, allowing them to track not only the sentiment of a brand, but the sentiment of certain phrases, products, and releases. WatermelonBlock’s AI and proprietary algorithms are versatile and will be used in many different products and industries. Stay tuned for WatermelonMusic too.”


Big Data – VentureBeat

Expert Interview (Part 1): Human Centered Design and Elise Roy on Transforming Disability into Innovation

Elise Roy says that losing her hearing when she was 10 years old has been one of the greatest gifts she’s ever received.

Early on, she viewed her loss as something she had to deal with and overcome. That perspective has shifted though.

“My disability has become an asset,” Elise says. “Rather than something I have to deal with, it’s a tool.”


A tool Elise has leveraged in just about every job she’s taken on – as one of the country’s few deaf lawyers, as an artist and designer, and as a human rights activist.

Most recently, she’s started working as a consultant, using her unique perspective to help organizations take a different approach to their design practices. Her goal is to show the groups she works with that incorporating a deeper understanding of how the disabled navigate the world will lead to extraordinary innovation and results.

“I believe that these unique experiences that people with disabilities have is what’s going to help us make and design a better world … both for people with and without disabilities,” she shared in her TED talk.

She consults through the lens of Human Centered Design: developing the best product by defining problems and understanding constraints, observing people in real-world situations, asking questions, and then prototyping to test ideas quickly and cheaply, all while keeping the end users, the customers, in focus.

Elise learned first-hand how effective this method of problem-solving is when she was taking a fabrication class in art school. The woodworking tools she used would sometimes kick back at her. Generally, they would emit a sound before doing so, but because of her hearing loss, Elise wasn’t able to hear it. In response, she developed a pair of safety goggles that give a visual warning when the pitch of the machine changes. The product can help protect both those who are hearing impaired and those with no hearing loss.

She points to other widely used inventions that were initially created for people with a disability, too. Email and text messaging, for instance, were designed for deaf users.

The OXO potato peeler was designed to help individuals with arthritis but was adopted by the general population because of how comfortable it is to use. There are tech companies currently developing apps and websites who are looking to people with dyslexia and intellectual disabilities for inspiration on simplifying design and offering an easier-to-use interface for everyone.

Check back for part 2 where Elise goes more in depth with what she is doing with Human Centered Design.

Also, we have a new eBook focused on Strategies for Improving Big Data Quality available for download.


Syncsort Blog

Generation Z and Mainframe Programming

When you think of mainframe programming, images of scruffy old men stuck in the 1960s might come to mind. Yet as Caroline McNutt, a young mainframe programmer at Ensono explained recently, this image does not reflect reality.

Ensono provides managed IT services for a variety of infrastructure, including mainframes. McNutt, who has worked with Ensono’s mainframe teams in the company’s Conway, Arkansas location since 2016, recently spoke to us about the state of the mainframe, the role of young women in computer science, and more.

Here’s what McNutt had to say.


What’s your role at Ensono, and how long have you been in the position?

I am an associate mainframe systems programmer. I’ve been with Ensono for about two years.

I first worked with Ensono in summer 2016 for a two-month college internship. After I graduated, they hired me to work full-time.

What does your day-to-day mainframe programming work entail?

I’ve been going through some of the older, legacy processes and trying to automate them through SAS. I also work on mainframe monitoring.

I work with z/OS. Other people at the company work with mainframe VM systems, but I kind of like my green screen.

How much interest do you see in mainframes among other young people and women?

Among women, a lot! On my current team at Ensono, we’re about 50/50 males and females. And there are quite a few females across the company as a whole.

[For more on women in the technology industry, check out our recent blog post “Women in Tech: Recognizing Female Leadership in Technology.”]

As for young people, most people at the company are older than me. But I’m twenty-four, so that’s not necessarily saying much.


What was your experience with becoming a woman programmer who focuses on mainframes like?

At Ensono, I have faced no challenges at all as a woman programmer.

In college, though, things were harder. Even female teachers looked down on [women majoring in computer science]. A professor told me I was only hired for mainframe programming because I was a quota filler. And as I progressed further into the computer science degree program, [women programmers] would drop off.

Learning about mainframes in college was hard, too, even as a computer science student. They don’t teach mainframes. I didn’t even know what a mainframe was at first. And I think that’s a problem.

Given the lack of coverage of mainframes at universities, what do you think the future looks like for mainframes?

I definitely don’t think the mainframe is going anywhere for the foreseeable future.

A lot of people talk about cloud coming in and replacing mainframes. But cloud performance just doesn’t match what we already have in place on the mainframe.

Plus, a lot of the time, mainframes have been around for so long that the effort it would take to convert a mainframe to another platform would be so costly and time-intensive that it’s not practical to do that.

I definitely feel like I have a stable career here working on mainframes.

Download our eBook, Data Encryption in the Mainframe World, for even more on mainframes!


Syncsort Blog

Ctrl-labs’ armband lets you control computer cursors with your mind

Controlling a mouse pointer with your mind may sound like science fiction, but Ctrl-labs, a startup based in New York City, is working hard to make it a reality.

I recently swung by the company’s new digs in Manhattan, a high-rise suite overlooking Herald Square, a few blocks south of the Theater District. It had been two weeks since Ctrl-labs’ employees moved into the Midtown office, lead scientist Adam Berenzweig told me, and the smell of fresh paint still hung in the air.

“We haven’t finished unpacking the furniture,” he said.

Ctrl-labs can afford the upgrade. In June, it raised $28 million in an investment round led by Lux Capital and GV (formerly Google Ventures), the venture capital arm of Alphabet (Google’s parent company). The two join a long and growing list of high-profile backers that includes the Amazon Alexa Fund, Paul Allen’s Vulcan Capital, Peter Thiel’s Founders Fund, Tim O’Reilly, Slack founder and CEO Stewart Butterfield, Warby Parker CEO Dave Gilboa, and others.

What convinced those tech luminaries to fund the three-year-old neuroscience and computing startup, I’d soon find out, feels a little bit like magic.

Finding the neural link

Thomas Reardon, the founder and CEO of Ctrl-labs (formerly Cognescent), was something of a child prodigy. He took graduate-level math and science courses at MIT while in high school and spearheaded a project at Microsoft that became Internet Explorer. A few years later, he enrolled in Columbia University’s classics program, where he studied neuroscience and behavior and went on to earn his Ph.D.

It was in 2015 at Columbia that Reardon, along with fellow neuroscientists Patrick Kaifosh and Tim Machado, conceived of Ctrl-labs and its lofty mission statement: “to answer the biggest questions in computing, neuroscience, and design.” After three years of research and development, the team produced its first product: an armband that reads signals passing from the brain to the hand.

The armband — a bound-together collection of small circuit boards, each soldered to gold contacts meant to adhere tightly to forearm skin — is very much in the prototype stages. A ribbon cable connects the contacts to a Raspberry Pi in an open plastic enclosure, which in turn connects wirelessly to a PC running Ctrl-labs’ software framework.

It’s deceptively unsophisticated.


Above: A view from Ctrl-labs’ new offices in New York City.

Image Credit: Kyle Wiggers / VentureBeat

Berenzweig thinks of the armband as an interface much like a keyboard or mouse. But unlike most peripherals, it uses differential electromyography (EMG) — an effect first observed in 1666 by Italian physician Francesco Redi — to translate mental intent into action.

How does it do that? By measuring changes in electrical potential, which are caused by impulses that travel from the brain to hand muscles through lower motor neurons. This information-rich pathway in the nervous system comprises two parts: upper motor neurons connected directly to the brain’s motor center, and lower axons that map to muscle and muscle fibers. Neurotransmitters run the length of that long neural pathway and turn individual muscle fibers on and off — the biological equivalent of binary ones and zeros.

The armband is quite sensitive to these signals. Before Berenzweig kicked off a demo of the wristband, he made sure to put distance between it and a nearby metal pushcart.

“It acts like an antenna,” he said, “so it’s susceptible to interference.”

While the armband’s 16 electrodes monitor the electric fields generated by nerves in the wearer’s arm, Ctrl-labs’ software ingests the data, and with the help of a machine learning algorithm trained using Google’s TensorFlow, distinguishes between the individual pulses of each nerve.
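As a rough illustration of that front end (the shapes and the feature choice are assumptions; Ctrl-labs’ actual TensorFlow model is not public), a windowed 16-channel signal might first be reduced to per-channel features before a learned classifier takes over:

```python
import numpy as np

# Assumed shapes: 16 electrode channels, 50-sample analysis windows.
# Ctrl-labs' actual model is not public; per-channel RMS energy is
# just a common first feature for EMG-style signals.

N_CHANNELS = 16   # electrodes on the band
WINDOW = 50       # samples per analysis window

rng = np.random.default_rng(0)
window = rng.normal(size=(WINDOW, N_CHANNELS))  # stand-in for raw EMG

# Per-channel RMS energy: one feature per electrode, ready to feed
# into a trained classifier that maps features to nerve activations.
rms = np.sqrt((window ** 2).mean(axis=0))
assert rms.shape == (N_CHANNELS,)
```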

Berenzweig, who had put on an armband before I arrived, showed me on a PC an EKG-like graph of colored lines representing each contact. As he lifted a digit, one of the lines tremored slightly. Then he let his hand rest at his side, motionless. It tremored again.


Above: Ctrl-labs’ prototype armband.

Image Credit: Kyle Wiggers / VentureBeat

The wondrous thing about EMG, Berenzweig explained, is that it works independently of muscle movement; generating a brain activity pattern that Ctrl-labs’ tech can detect requires no more than the firing of a neuron down an axon, or what neuroscientists call action potential.

That puts it a class above wearables using electroencephalography (EEG), a technique that measures electrical activity in the brain through contacts pressed against the scalp. EMG devices draw from the cleaner, clearer signals from motor neurons, and as a result are limited only by the accuracy of the software’s machine learning model and the snugness of the contacts against the skin.

That’s not to suggest they’re perfect. Waterloo, Ontario-based startup Thalmic Labs began shipping an EMG armband in 2013 — the Myo — that can detect muscle movements, recognize gestures and joint motion, and map neural signals to keys on a keyboard and video game hotkeys. But many of the less-than-stellar reviews mention the inconsistency of its gesture recognition.

Ctrl-labs prototyped its machine learning algorithms with Myo before developing its own hardware, and Berenzweig owns one personally. But the current iteration of Ctrl-labs’ armband is far more precise than the Myo, and can work anywhere on the forearm or upper arm. Future versions will work on the wrist.

He explained this to me as he typed a few commands into a Linux terminal and fired up the first demo. A likeness of a human hand appeared onscreen and Berenzweig manipulated it with his fingers, their movement mirroring that of his digital doppelganger.

Then he strapped the bracelet on my arm. I had worse luck — the thumb on the computerized hand reflected the motions of my thumb, but the index and pinkie finger didn’t — they remained stiff. Berenzweig had me recalibrate the system by angling my wrist slightly, but to no avail.

He chalked it up to the demo’s generalized machine learning model. Experimental versions of the software, he said, are performing much better.

In a second demo, I watched as Berenzweig moved a computer cursor toward a target. Unlike the first, the movements in this demo actively train a neural net, tuning the system to each user’s neural idiosyncrasies.

When it came time again for my turn, I wasn’t exactly sure how to control it. But after a trepidatious start in which the cursor made maddening laps around the target, coming close but not quite touching it, the algorithm — and by extension, precision — improved drastically. Within just a few seconds, moving the cursor with thought became almost second nature, and I was able to steer it up, down, left, and right by thinking about moving — but not actually moving — my hand.

Berenzweig believes this kind of algorithmic learning, which is crucial to the system’s accuracy, could be gamified in other ways. “We’re trying to find the right way to approach it,” he said.

An eye on VR — and smartphones

Ctrl-labs’ armband won’t be relegated to the lab for much longer. By the end of this year, the company plans to ship a developer kit in small quantities and make available software that will expose the band’s raw signals. The final design is in flux, and at least a few will be manufactured in-house.

Pricing hasn’t been decided, though Berenzweig said it will be higher than the eventual commercial model’s price point.

Around the corner from the demo and adjacent to a room with a MakerBot (which the team uses to quickly prototype shells), Berenzweig showed me a poster board of concepts and potential form factors. Some looked not unlike Android Wear smartwatches — while the developer kit will have to be tethered to a PC for some processing, he said, the processing overhead is such that all of the hardware will eventually be self-contained.

As for what Ctrl-labs expects its early adopters to build with it and for it, video games top the list — particularly virtual reality games, which Berenzweig thinks are a natural fit for the sort of immersive experiences EMG can deliver. (Imagine swiping through an inventory screen with a hand gesture, or piloting a fighter jet just by thinking about the direction you want to fly.)

But Ctrl-labs is also thinking smaller. Not too long ago, it demonstrated to Wired a virtual keyboard that maps finger movements to PC inputs, allowing a wearer to type messages by tapping on a tabletop, and at the 2018 O’Reilly AI conference in New York City, Reardon spoke about text messaging apps for smartphones and smartwatches that let you peck out replies one-handed. Berenzweig, for his part, has experimented with control schemes for tabletop robotic arms.

“You know how early versions of Windows used to ship with Minesweeper, and Windows sort of became known for it? We need to find our Minesweeper,” he said.


Above: A few Ctrl-labs armband engineering samples.

Image Credit: Ctrl-labs

One field of research Ctrl-labs won’t be investigating is healthcare — at least not at first. While Berenzweig agrees that the tech could be used to help stroke victims and people with degenerative neural diseases like amyotrophic lateral sclerosis (ALS), he says those aren’t applications the company is actively exploring. Ctrl-labs is loath to submit its hardware for approval by the Food and Drug Administration, a potentially years-long process. (Reardon’s stated goal is to get a million people using the armband within the next three to four years.)

“We’re focusing on consumers right now,” Berenzweig said. “We think it has medical use cases, but we want it to be a consumer product.”

By the time Ctrl-labs hits retail store shelves, it’ll likely have competition. Thalmic Labs is developing a second-generation EMG armband, and a new venture funded by SpaceX and Tesla head Elon Musk, Neuralink, aims to develop mass-market implants that treat mood disorders and help physically disabled people regain mobility.

Not to be outdone, Facebook is researching a kind of telepathic transcription that taps the brain’s speech center. In September 2017 at the MIT Media Lab conference, project lead Mark Chevillet told the audience that the team plans to detect brain signals using noninvasive sensors and diffuse optical tomography. Effectively, it would allow a user to type words simply by thinking them.

Berenzweig is convinced that Ctrl-labs’ early momentum, plus the robustness of its developer tools, will help it gain an early lead in the brain-machine interface race.

“Speech evolved specifically to carry information from one brain to another. This motor neuron signal evolved specifically to carry information from the brain to the hand to be able to effect change in the world, but unlike speech, we have not really had access to that signal until this,” he told Wired in September 2017. “It’s as if there were no microphones and we didn’t have any ability to record and look at sound.”


Big Data – VentureBeat

Webcast: Introducing the Latest in High Availability from Syncsort

Syncsort has released its latest on-demand webcast: Introducing the Latest in High Availability from Syncsort. In a recent survey of 5,632 IT professionals on data protection strategies and IT priorities, 67% cited data availability as the top measure of IT performance. These findings show how visible and costly downtime has become for customers, partners, and employees in today’s constantly connected world.


Syncsort’s market-leading portfolio of high availability and disaster recovery solutions continues to expand and evolve to meet the demands of organizations faced with exploding data volumes, limited IT resources and intensifying pressure for non-stop access to data and systems.

Learn about the latest developments in our IBM i high availability portfolio that can help your organization meet its critical recovery point and recovery time objectives.

View the webcast now!


Syncsort Blog

Stanford researchers harnessed AI to generate memes

Artificial intelligence can do just about anything these days, like produce 3D renderings of an object from snapshots, defeat facial recognition systems, or track wildlife in the Serengeti. It’s also surprisingly good at generating memes.

In a white paper titled “Dank Learning” (yes, really), Abel L. Peirson and E. Meltem Tolunay, the two lead scientists on the project, describe a neural network that ingests, gains an understanding of, and spits out internet in-jokes. The AI consists of a convolutional neural network (CNN) that takes images as inputs and translates them into mathematical representations called vector embeddings (an encoder), and a long short-term memory (LSTM) recurrent neural network (RNN) that creates captions (a decoder).
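The encoder-decoder pipeline the paper describes can be sketched schematically. The toy below is not the authors' model: a random projection stands in for the CNN encoder, a single handwritten LSTM cell serves as the decoder, the weights are untrained, and the vocabulary, shapes, and greedy-decoding loop are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode_image(image, w_proj):
    # Stand-in for the CNN encoder: flatten the image and project it into
    # a vector embedding (a real system would use a pretrained CNN).
    return np.tanh(image.reshape(-1) @ w_proj)

def lstm_step(x, h, c, w, u, b):
    # One step of an LSTM decoder cell: input, forget, and output gates
    # plus the candidate cell state.
    z = x @ w + h @ u + b
    i, f, o, g = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def generate_caption(image, vocab, steps=5, dim=16):
    # Untrained toy weights; in the paper these are learned from a meme corpus.
    w_proj = rng.standard_normal((image.size, dim)) * 0.1
    w = rng.standard_normal((dim, 4 * dim)) * 0.1
    u = rng.standard_normal((dim, 4 * dim)) * 0.1
    b = np.zeros(4 * dim)
    w_out = rng.standard_normal((dim, len(vocab))) * 0.1
    h = c = np.zeros(dim)
    x = encode_image(image, w_proj)  # the image embedding seeds the decoder
    words = []
    for _ in range(steps):
        h, c = lstm_step(x, h, c, w, u, b)
        words.append(vocab[int(np.argmax(h @ w_out))])  # greedy decoding
        x = h  # simplification: feed the hidden state back as the next input
    return words

vocab = ["one", "does", "not", "simply", "generate", "memes"]
caption = generate_caption(rng.standard_normal((8, 8)), vocab)
print(caption)
```

With trained weights, word embeddings fed back at each step, and a real convolutional encoder, this same loop is how such a captioner decodes a meme token by token.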

Here’s a tricky question: can you tell which of the following memes were created by the neural net?

The Stanford researchers used a Python script to feed the system more than 400,000 images with 2,600 unique image-label pairs — specifically “advice animal”-style memes, pictures of specific characters with humorous captions (e.g., a cat in a bathrobe). They then asked human subjects to judge each image on its “hilarity” and guess whether it was produced by a person or the neural network.

“This allows for relatively simple collection of datasets,” Peirson and Tolunay wrote. “In this paper, we specifically refer to meme generation as the task of generating a humorous caption in a manner that is relevant to the initially provided image, which can be a meme template or otherwise.”

The verdict: humans were able to pick out the algorithmically created memes about 70 percent of the time, but graded them fairly evenly on wittiness.

“The average meme produced from both is difficult to differentiate from a real meme and both variants scored close to the same hilarity rating as real memes, though this is a fairly subjective metric,” the researchers wrote.

So what about the gallery of memes above? The “big data” meme was the only one of the three crafted by a human — the others were the work of the neural net. Watch out, dank memesters — the robots are coming for you.



6 questions you must answer to identify your best way to implement AI


Commodity artificial intelligence-as-a-Service (AI-aaS) offerings are popping up everywhere. Just as you can whip out a credit card and spin up a virtual data center in Amazon, Microsoft, or Google’s cloud, you can now call on previously trained machine learning clusters to handle your AI chores.

Using an API, you can upload a photo library to Google Cloud Vision or Amazon Rekognition to have the program scan it for objects, faces, logos, or terms of service violations in seconds, for fractions of a penny per image. Any business can now deploy the same technology used by the Google Photos app and Amazon Prime Photos to automatically categorize and label smartphone snaps based on the people, objects, and landmarks inside them.

Real estate companies use image recognition to allow prospective home buyers to search for houses whose appearance pleases them. Car companies like Kia use AI to customize marketing campaigns based on the photos people post to social media. Cities can also use the technology to understand traffic patterns and make better decisions about infrastructure projects. And so on.

This all sounds wonderful, revolutionary, and scalable, but as with other commoditized technologies, the off-the-shelf, one-size-fits-all approach doesn’t work for all companies or business goals, which raises the question: For your AI needs, should you choose a commodity cloud AI service or opt for a more comprehensive custom solution? As AI becomes more and more critical to businesses, three basic options have emerged:

  1. Use commodity AI-aaS offerings such as Amazon AI (including Rekognition), Clarifai, CloudSight, Google Cloud Vision, IBM Watson, or Microsoft Cognitive Services. These offer a relatively narrow range of AI functions, mostly enabled via APIs for text and image recognition, as well as natural language processing (NLP).
  2. Engage third-party applied AI companies that specialize in a broader and more customized range of vertical AI services. This sometimes involves an on-premises solution for companies that don’t want to share their data in the cloud, or focuses on a particular vertical, such as finance, health care, marketing, or retail.
  3. Build out a full-stack machine learning system from scratch, using your own experts and data. This is by far the most complex option and is primarily for organizations where AI is essential to their core value and revenue.

Each of these options makes sense for certain kinds of business users. Exactly which one is your best option depends on how you answer the following questions.

1. What kind of AI jobs do you need to do?

AI is helpful in a wide range of business use cases, including predictive analytics, forecasting, process optimization, personalization, and many others. But while IBM Watson offers some additional analytics and language processing tools, many commodity AI-aaS vendors are focused on the tasks most commonly associated with machine learning: text and image recognition. These serve as out-of-the-box solutions for specific, narrow tasks at organizations whose main functions don’t center on AI — say, a local law enforcement authority that wants to quickly scan image databases against a picture via facial recognition, or editorial sites that want to moderate comment sections (or images) for objectionable content.

If you have any other or more complex AI needs in addition to those clearly defined tasks, or massive amounts of data (proprietary or otherwise), you’ll likely want to engage an applied AI partner, or embark on your own internal full-stack AI setup (more on that later).

2. What kind of volume can you afford?

Image and text recognition services are increasingly commoditized, and sometimes even free at low volumes. But if you’re doing them at scale, the costs can grow exponentially.

Say you’re running a small photo-sharing service and need to scan and analyze 10,000 individual images a month to ensure they don’t contain objectionable content. On Amazon Rekognition that would cost $10; Google Cloud Vision would charge you $13.50, but that also includes label detection (i.e., identifying whether it’s a picture of a cat, a bicycle, a bagel, etc.). Label detection would also be useful for, say, realtors who want to flag kitchens featuring particular types of cabinets or countertops, or doctors who need to identify different types of skin lesions.

If you were operating on the scale of Pinterest, however, whose users upload 14 million images a day, the economics of image safety search would change significantly. Even at the steeper discounts offered for large volumes of images, it would cost a service of that size about $16,500 a day — just over $5.1 million a year — with Google Cloud Vision.* Using Amazon would cost $10,600 per day and around $2.3 million annually.

Of course, the cost also goes up depending on how much information you’re asking the AI to provide. At its steepest discount, Google Cloud Vision adds another $0.0006 per image for detecting text, plus the same amount for detecting faces, logos, and landmarks, respectively; add all that to labeling and content scanning, and a service on the scale of Pinterest is looking at spending more than $17.6 million annually.
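The back-of-the-envelope math above is easy to reproduce in a short script. The per-image rates below are illustrative assumptions inferred from the figures quoted in this article, not official price sheets — real pricing is tiered and changes, so check each vendor's current rate card before budgeting.

```python
# Illustrative per-image rates inferred from the figures in this article;
# treat them as assumptions, not official vendor pricing.
RATE_PER_IMAGE = {
    "rekognition_moderation": 0.001,     # ~$10 per 10,000 images
    "cloud_vision_per_feature": 0.0006,  # steepest-discount tier, per feature
}

def monthly_cost(images_per_month, rate):
    """Flat-rate monthly cost for a single analysis feature."""
    return images_per_month * rate

def annual_cost(images_per_day, rate, features=1):
    """Annualized cost when each image is run through several features."""
    return images_per_day * rate * features * 365

# Small photo-sharing service: 10,000 images a month, one feature.
small = monthly_cost(10_000, RATE_PER_IMAGE["rekognition_moderation"])

# Pinterest-scale: 14 million uploads a day, six Cloud Vision features
# (labels, safe search, text, faces, logos, landmarks).
large = annual_cost(14_000_000, RATE_PER_IMAGE["cloud_vision_per_feature"],
                    features=6)

print(f"small service: ${small:,.2f}/month")
print(f"Pinterest-scale, 6 features: ${large:,.0f}/year")
```

Under these assumed rates, the small service pays $10 a month while the Pinterest-scale one lands in the same eight-figure annual range the article cites — the point being that per-image pennies compound brutally with volume.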

Suddenly those inexpensive commodity cloud services don’t seem so cheap anymore.

3. How good do the results have to be?

Though commodity AI-aaS machine learning models have been trained against very large data sets — as when Google used 200,000 images from the Metropolitan Museum of Art to train its BigQuery engine — that doesn’t mean they’re always going to produce accurate results.

Upwork recently published a comparison of six leading image recognition APIs to gauge how accurate they are at labeling images of animals, people, text, and objects. The test wasn’t rigorously scientific, but the results were fascinating.

Each AI engine’s predictions were on target with some images and far off base with others. For example, all excelled at identifying a parked car on an urban street, but some stumbled when shown two cats, the Grand Canyon, a bottle of wine, or three people standing on a sidewalk.

Shown a realistic portrait of a Western frontiersman leading his pack-laden horse, Google CV correctly identified it as a painting, while Watson suggested “camel racing” and Microsoft’s best guess was the surreal “person riding a surfboard on top of a book.”

A big advantage to going with an applied AI solutions provider or consultant (or running your own AI stack) is the ability to train the machine learning models in more customized ways and fine-tune the results to increase accuracy. For example, if you’re building a wine recommendations app, instead of just labeling a bottle as “wine” or “pinot noir,” you might want to drill down into more specifics, such as the vintner, region, or vintage. Or if, say, you’re a brewer who wants to automatically identify your beer’s logos on social media images even when they’re only partially showing and the bottles are tipped over — a stiff image recognition challenge known as occlusion — then you would benefit from an applied AI or DIY full-stack solution.

4. How much flexibility do you require?

Commodity AI-aaS offers far less control and flexibility than an applied AI or in-house full-stack solution in other ways, too. For example, Amazon Rekognition offers thousands of image labels, but not always ones your business needs. Amazon might be able to tag “kitchen” or “sink,” for example, but not necessarily “Kohler faucet” or “tile backsplash.” To add new labels or change how Amazon flags images for potentially objectionable content, you’ll need to request it. Amazon requires six to eight weeks to add new types of moderated content and does not promise to honor all requests.

Google Cloud Vision places limits on the size and number of images you can feed through the API at any time, and all services limit the kinds of files they will accept and types of data they can recognize. Amazon accepts only PNG and JPEG files, for example. Only three of the six AI-aaS vendors mentioned here offer optical character recognition (OCR) along with image recognition; only Clarifai accepts video as well as still images. In other words, if all your real estate images are in RAW format, you may need to convert them first. If you want a service that reads the labels on images of wine bottles, you’ll want OCR.
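Constraints like these are cheap to enforce client-side before you pay for an API call. The sketch below checks uploads against assumed limits: the accepted formats follow the article's note that Amazon takes only PNG and JPEG, but the size cap and helper names are invented placeholders — substitute the real limits from your provider's documentation.

```python
from pathlib import Path

# The article notes Amazon Rekognition accepts only PNG and JPEG files.
ACCEPTED_SUFFIXES = {".png", ".jpg", ".jpeg"}
# Invented placeholder cap, not a documented vendor limit.
MAX_BYTES = 5 * 1024 * 1024

def validate_upload(filename, size_bytes):
    """Return a list of problems; an empty list means the file looks safe to send."""
    problems = []
    suffix = Path(filename).suffix.lower()
    if suffix not in ACCEPTED_SUFFIXES:
        problems.append(f"unsupported format: {suffix or '(none)'}")
    if size_bytes > MAX_BYTES:
        problems.append(f"too large: {size_bytes} bytes > {MAX_BYTES}")
    return problems

print(validate_upload("listing.jpg", 1_000_000))  # passes both checks
print(validate_upload("listing.raw", 9_000_000))  # fails format and size
```

A RAW file from a real estate shoot would fail both checks here, which is exactly the moment to transcode it to JPEG rather than burn an API call on a guaranteed rejection.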

The old Henry Ford line about how you can have a Model T in any color (as long as it’s black) applies to AI as a service — your options will be limited.

5. What kind of performance do you need?

Latency is the quiet killer for applications that require near-real-time image or text processing. Clarifai notes that its API responds within 200 to 400 milliseconds for a single image sent from inside the United States; add more images or video, or increase your distance, and the latency grows worse. CloudSight, on the other hand, needs from 6 to 12 seconds to respond, possibly because it relies on human crowdsourcing to manually tag some images.

As with all cloud services, reliability is also an issue; your ability to process text or images is entirely dependent on the availability of third-party servers. Anyone who’s suffered through the rare AWS or Google outage can tell you how frustrating that can be. Even one extended outage is one too many.

Having an AI stack on-site will largely negate the latency issue and give you more control over availability.

6. How much in-house expertise do you have?

AI engineers are in huge demand. Many organizations simply don’t have the necessary talent on hand, and recruiting that talent means competing for candidates with companies such as Google, Microsoft, Facebook, and Amazon, which are aggressively investing and innovating in the AI arena. And even if you do have the resources to hire top AI engineers, you’ll still have trouble finding ones who have domain expertise around your particular business.

If you’re just experimenting with incorporating AI into your business, or you want to offer basic low-volume AI functionality as a service to customers, then cloud-based services can be a good way to get started. But if you need more scale, greater flexibility, domain expertise, data privacy, or services that a commodity cloud service doesn’t offer, and you don’t have the desire or resources to recruit and hire a full AI team in-house, then finding a third-party applied AI provider is probably a better way to go.

While ramping up will be a business and technological challenge, creating your own full AI stack can be significantly advantageous for your organization in the long run, if AI is your core value. But for everyone else, getting on board with an AI-aaS solution or applied AI partner is essential. As noted by Harvard Business Review, AI is poised to be a transformational technology — on a par with the steam engine, electricity, and the internet. Organizations that don’t get ahead of that train are in danger of being run down.

*Google Cloud Vision’s pricing only accounts for volumes up to 20 million images per month. Presumably, there are discounts for higher volumes available upon request, but even then, the expense is considerable.

Ken Weiner is CTO at GumGum, an applied computer vision company.

