Category Archives: Big Data

Belong partners with American Cancer Society to help beat cancer with AI


In the battle against cancer, access to high-quality information, data, and assistance is invaluable.

Today, Belong: Beating Cancer Together — whose app connects patients to public and private chats with doctors and professionals — has announced a partnership with the American Cancer Society. The partnership adds a closed forum for American Cancer Society members, allowing them access to its online patient-doctor community.

Belong isn’t just an app that connects people. It is also using AI and machine learning, combined with big data, to help provide patients with personalized information, education, and assistance.

“Belong is applying state of the art machine learning, AI, and NLP technologies to develop one of the world’s most powerful real-world patient-generated data lakes,” Eliran Malki, CEO and cofounder at Belong, told VentureBeat in an interview. “This is disruptive due to both the unprecedented quality of the real-world data it generates and its longitudinal nature. We also use patent-pending d-PRO (digital Patient Reported Outcome) features and other methodologies to build this data lake.”

Of course, Belong’s application of AI doesn’t end at creating a data repository. The organization is using cognitive computing technologies to better understand what happens to patients as they fight their cancer.

“Belong applies machine learning, AI, and medical neural networks to our data lake to better understand patient profiles and patient journeys,” Malki said. “These allow us to (retrospectively) identify challenges, critical decision points and journey ‘bottlenecks’ cancer patients and their families face. We then communicate some of these crowdsourced insights to cancer patients on our platform.”

That’s important because the community, information, and insights on offer can help patients decide what to do next.

“Examples of crowdsourced insights can range from patients’ tips on how to cope with specific side effects to how choosing the right MRI machine can be relevant to diagnosis phases, what some of the warning signs to look out for are, and more,” Malki said.

This new relationship adds the American Cancer Society’s information, insights, and resources to the Belong network.

“The partnership with the ACS brings a new layer of engagement with the ACS’ range of valuable resources, which are now made available to patients through a bi-directional mobile-based communication platform,” Malki said. “So now all patients using Belong have direct access to the ACS’ resources, their information specialists, and other experts who do extraordinary work in both responding to patients and their families on the app and in providing targeted access to the ACS’ services and knowledge base.”

The American Cancer Society is leading a forum on the Belong platform, named “American Cancer Society4U.” Belong users will be able to read relevant information that connects them to resources, as well as contact the American Cancer Society via the app.

So what’s next for Belong?

“Belong’s mission is to analyze the massive and information-rich real-world data and the patient and caregiver journeys that we generate and eventually use that to help advance science and cancer research toward finding better and more effective solutions for patients,” Malki said. “In other words, we aim to help scientists identify successful patient journeys, and on the other hand identify unsuccessful journeys that should be avoided.”

The Belong app is available for both Android and iOS via the company’s website.


Big Data – VentureBeat

New eBook! IT Operations Checklist for z/OS Mainframes

For decades, monitoring the overall health of IT components running on the IBM z/OS mainframe has been relegated to vendors specializing in real-time monitoring of performance and availability. While they provide deep analysis into the individual technology silos, there’s still a gap in the overall approach to providing an integrated and holistic view of IT operations within the mainframe environment.

For a comprehensive start to ensuring the health, availability, and security of your z/OS mainframe systems, download our new eBook IT Operations Checklist for z/OS Mainframe.


Explore how new technologies have emerged that enable you to capture mainframe information and quickly move it to an open-system based analytics platform to be integrated, correlated, analyzed, and visualized.

Get the eBook now!


Syncsort + Trillium Software Blog

Why Data Quality Should be Part of Your Disaster Recovery Plan

When you think of disaster recovery, data quality is likely not the first thing that comes to mind. But data quality should factor prominently into your disaster recovery plan. Here’s why.

Disaster recovery is the discipline of preparing for unexpected events that can severely disrupt your IT infrastructure and services, and the business processes that depend on them.

The disasters that necessitate disaster recovery can take many forms. They could be natural disasters, like a major storm that wipes out a data center. They could be security events, wherein hackers hold your data for ransom or bring your services down using DDoS attacks. They could be an attack by a disgruntled employee who deliberately wipes out a crucial database.


What all types of disasters have in common is that it’s virtually impossible to know when they’ll occur, or exactly what form they’ll take.

Forming a Disaster Recovery Plan

That’s why it’s essential to have a disaster recovery plan in place. Your plan should:

  • Identify all data sources that need to be backed up so that they can be recovered in the event of a disaster.
  • Specify a method or methods for backing up the data.
  • Identify how frequently backups should occur.
  • Determine whether on-site data backups are sufficient for your needs, or if you should back up data to a remote site (in case your local infrastructure is destroyed during a disaster).
  • Specify who is responsible for performing backups, who will verify that backups were completed successfully and who will restore data after a disaster.

If you need help building and implementing a disaster recovery plan, you can find entire companies dedicated to the purpose. With the right planning and skills, however, there is no reason you cannot maintain an effective disaster recovery program yourself. Regardless of whether you outsource disaster recovery or not, the most important thing is simply to have a plan in place. (See also: 5 Tips for Developing a Disaster Recovery Plan)
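The checklist above maps naturally onto a small, auditable inventory. Here is a minimal sketch of that idea in Python; the data sources, owners, and schedules are hypothetical, and a real plan would of course carry far more detail.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class BackupPolicy:
    # One entry per data source identified in the plan (all names are illustrative).
    source: str            # what gets backed up
    method: str            # how the backup is taken
    frequency_hours: int   # how often backups should occur
    destination: str       # on-site or remote target
    owner: str             # who performs and verifies the backup
    last_backup: datetime  # when the last successful backup completed

def backups_overdue(policies, now=None):
    """Return policies whose most recent backup is older than the agreed frequency."""
    now = now or datetime.utcnow()
    return [p for p in policies
            if now - p.last_backup > timedelta(hours=p.frequency_hours)]

# Hypothetical plan entries:
plan = [
    BackupPolicy("orders_db", "nightly dump", 24, "remote object storage",
                 "dba-team", datetime(2017, 11, 14, 2, 0)),
    BackupPolicy("crm_exports", "incremental copy", 6, "on-site NAS",
                 "it-ops", datetime(2017, 11, 15, 8, 0)),
]

for policy in backups_overdue(plan):
    print(f"Backup overdue for {policy.source}; owner: {policy.owner}")
```

Even a toy script like this makes the plan testable: if an owner misses a backup window, the gap surfaces before a disaster does.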


Data Quality and Disaster Recovery

Now that we’ve covered the basics of disaster recovery, let’s discuss where data quality fits in.

Put simply, data quality matters in this context because whenever you are backing up or restoring data, you need to ensure data quality. Since data backups and restores are at the center of disaster recovery, data quality should be factored into every phase of your disaster recovery plan.

After all, when you’re copying data from one location to another to perform backups, data quality errors are easy to introduce for a variety of reasons. You might have formatting issues copying files from one type of operating system to another because of different encoding standards. Data could become corrupted in transit. Backups could be incomplete because you run out of space on the backup destination. The list could go on.

It’s even easier to make data quality mistakes when you’re recovering data after a disaster. Even the most prepared organization will be working under stress when it’s struggling to recover data after a disaster. The personnel performing data recoveries may not be familiar with all the data sources and formats they are restoring. In the interest of getting things up and running again quickly – a noble goal when business viability is at stake – they may take shortcuts that leave data missing, corrupted or inconsistent.

All of the above are reasons why data quality tools should be used to verify the integrity of backed-up data, as well as data that is recovered after a disaster. It’s not enough to check the quality of your original data sources, then assume that your backups and the data recovered based on those backups will also be accurate. It might not be, for all the reasons outlined above and many more.
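As a rough illustration of what that verification can look like, the sketch below compares a source file and its backup by checksum and row count using only the Python standard library. The file paths are placeholders, and real data quality tooling goes far beyond these two checks, but the principle is the same: never assume the copy matches the original.

```python
import csv
import hashlib
from pathlib import Path

def file_checksum(path, chunk_size=1 << 20):
    """SHA-256 of a file, read in chunks so large backups don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def csv_row_count(path, encoding="utf-8"):
    """Number of data rows in a CSV file, excluding the header."""
    with open(path, newline="", encoding=encoding) as handle:
        return max(sum(1 for _ in csv.reader(handle)) - 1, 0)

def verify_backup(source: Path, backup: Path) -> list:
    """Return a list of human-readable problems; an empty list means the copy looks intact."""
    problems = []
    if not backup.exists():
        return [f"backup missing: {backup}"]
    if file_checksum(source) != file_checksum(backup):
        problems.append("checksum mismatch (possible corruption in transit)")
    if source.suffix == ".csv" and csv_row_count(source) != csv_row_count(backup):
        problems.append("row counts differ (backup may be incomplete)")
    return problems

# Hypothetical paths, for illustration only:
for issue in verify_backup(Path("data/customers.csv"), Path("/mnt/backup/customers.csv")):
    print("WARNING:", issue)
```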

The last thing your business needs after it has suffered through and recovered from a disaster is lasting problems with its data. To prevent a disaster from having a lasting effect on your business, you must ensure that the data you’ve recovered is as reliable as your original data.

Syncsort’s data quality software and disaster recovery solutions can help you build your disaster recovery plan. Learn why Syncsort is a leader for the 12th consecutive year in Gartner’s Magic Quadrant for Data Quality Tools report.


Syncsort + Trillium Software Blog

Trending Now: Machine Learning Has Arrived

Just a few years ago we were still wondering, “Is machine learning the real deal?” If you aren’t already keenly aware, you should know that machine learning has arrived!

The Year’s Hottest Topic

During this past year, machine learning has been on everyone’s lips. From Big Data to mainframe operations, machine learning emerged as a key theme at nearly every conference Syncsort attended.

The Experts Weigh In

Machine learning is not only impacting the tech world, but also helping to improve various business processes. A number of influencers we’ve spoken with over the last 18 months have had a thing or two to say about how machine learning is affecting their area of expertise.

The Impacts of Machine Learning

Until recently, only very large organizations had the data management capabilities to leverage machine learning effectively. This is no longer the case. New tools and technologies are enabling companies of all sizes to begin experimenting with machine learning.

Machine learning and artificial intelligence are reshaping the technology world. But they are only as effective as the data that drives them. In other words, if you want to implement effective machine learning, you also need to pay attention to data quality.

More and more practical applications of this technology are starting to emerge. One example is using machine learning to fight plagiarism.


Machine Learning Hits the Mainframe

In March, mainframe expert Alan Radding told us, “The mainframe is emerging as a cognitive machine, and IBM is only making its cognitive capabilities available on premises for the z System. Any other platform has to access IBM’s cognitive capabilities in the cloud.”

So what does machine learning on the mainframe look like? Read our eBook Mainframe Meets Machine Learning to understand how advances in machine learning have started and will continue to strengthen mainframe security and power the automation of mainframe operations.

In our follow-up eBook Mainframe and Machine Learning for IT Service Intelligence, we review how an ITSI solution with machine learning capabilities can provide a comprehensive view of your organization’s service delivery, allowing you to effectively set SLAs, identify potential problems, and plan for changes in the IT environment.


Syncsort + Trillium Software Blog

Unbounce launches AI-powered landing page analyzer after 8 years of training

Trying to work out how to improve your landing page’s conversion rate is difficult at best. While a select few experts use a data-only approach, the process is fairly subjective. Like being an art critic. Or estimating the chances of Donald Trump’s impeachment.

Today, Unbounce — the landing page company — has released Landing Page Analyzer, a free tool that uses eight years of training data combined with artificial intelligence to tell you how well any landing page is performing, and why.

What Unbounce’s analyzer measures, and what it doesn’t, is of particular interest. On seeing the tool in action, SEO expert and Moz founder Rand Fishkin commented, “I’ve never seen a page analysis tool that’s focused on optimization. This can be hugely helpful for folks who want to quickly check that they’ve nailed the basics of landing page optimization and accessibility.”

Unbounce has previously published a report that summarizes the 75 million pieces of data it has been collecting on landing pages, noting what works and what doesn’t (we discussed this during an episode of VB Engage). Now it has released a tool to provide live feedback, rather than expecting marketers to read through that report, understand it, and apply the lessons to their landing pages.

The Analyzer measures nine distinct performance categories. For example, it takes into account page speed, which has become increasingly important thanks to Google’s algorithm changes over the years. It measures performance rankings and compares them to industry benchmarks so that you can see how you stack up against your peers. Mobile experience is important, especially since the introduction of the mobile index. The Analyzer also takes into account copy analysis, overall design, SEO factors, trust and security, message matching (checking if the message is in line with your general theme), and social tags.
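To make the idea of a category-based audit concrete, here is a toy sketch (emphatically not Unbounce’s analyzer) that fetches a page and scores a handful of crude proxies for speed, mobile readiness, SEO basics, and trust. The URL and the three-second threshold are placeholders chosen for illustration.

```python
import re
import time
from urllib.request import urlopen

def basic_landing_page_checks(url):
    """Toy landing page audit: load time plus a few simple on-page signals."""
    start = time.time()
    with urlopen(url, timeout=15) as response:
        html = response.read().decode("utf-8", errors="replace")
    load_seconds = time.time() - start

    checks = {
        "load_under_3s": load_seconds < 3.0,                       # crude page-speed proxy
        "has_title": bool(re.search(r"<title>.+?</title>", html, re.S | re.I)),
        "has_meta_description": 'name="description"' in html.lower(),
        "mobile_viewport_tag": 'name="viewport"' in html.lower(),  # minimal mobile check
        "uses_https": url.startswith("https://"),                  # trust and security signal
    }
    return load_seconds, checks

seconds, results = basic_landing_page_checks("https://example.com")
print(f"Loaded in {seconds:.2f}s")
for name, passed in results.items():
    print(f"{name}: {'OK' if passed else 'needs attention'}")
```

Unbounce’s tool applies far richer signals (copy analysis, message matching, industry benchmarks), but the structure, a list of named checks with pass or fail results, is the same.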

“When it comes to the AI, we know due to our research that certain copy lengths matter a lot,” Carl Schmidt, CTO at Unbounce, told me. “When we say that your copy is too long, there’s a high probability that our advice is going to bear out.”

Any website can be run through the tool, including that of your competition. Determining what you’re doing well, and what ranking other websites receive, is easy. You simply enter a URL, wait a few seconds, and the results appear. That simplicity masks a much more complex process behind the scenes, where AI steps in to determine what’s good, what’s bad, and what’s ugly about the website you’re analyzing.

The tool doesn’t yet perform complete natural language processing, but it understands how particular words affect conversion. That means that, for now, the tool won’t pick up on the subtleties of language, such as when you’re promoting scarcity — a tactic that definitely moves the needle.

“We’re not constructing sentences for you or doing any grammatical analysis right now,” Schmidt said. “That will be the next step. We are, however, performing some sentiment analysis to help with copy recommendations.”

The tool itself is free of charge and available from Unbounce’s website today — Unbounce will ask for your email address in return for using the product. The data that the company has been collecting on its own platform for the past eight years is feeding the current iteration, and the more people that use it, the more it will learn from those pages to improve the recommendations.


Big Data – VentureBeat

Microsoft updates Cosmos DB with Cassandra support, better availability guarantees


Cosmos DB, Microsoft’s managed database cloud service, received several updates today aimed at making it useful for a wider variety of users. One key addition is a preview of support for using the Cassandra NoSQL database API to run operations inside the system.

This provides another tool for categorizing and analyzing data inside Cosmos DB, which already supports Gremlin, MongoDB, and SQL. Microsoft also announced the general availability of an Apache Spark Connector that allows businesses to run real-time data analysis tasks over data stored in Cosmos DB.
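The practical appeal of the Cassandra API preview is that existing CQL-based code can, in principle, be pointed at Cosmos DB by changing only connection settings. Below is a rough sketch using the open source Python cassandra-driver; the endpoint, port, and credentials are placeholders, and the exact TLS and connection options depend on your driver version and on Microsoft’s documentation rather than on anything stated here.

```python
import ssl
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# Placeholders: substitute your account's Cassandra endpoint, port, and key.
CONTACT_POINT = "<account>.cassandra.cosmosdb.azure.com"
PORT = 10350
auth = PlainTextAuthProvider(username="<account>", password="<primary-key>")

# The Cosmos DB Cassandra endpoint requires TLS. Recent driver versions accept
# an ssl_context; older ones use the ssl_options argument instead.
ssl_context = ssl.create_default_context()

cluster = Cluster([CONTACT_POINT], port=PORT,
                  auth_provider=auth, ssl_context=ssl_context)
session = cluster.connect()

# Ordinary CQL from here on.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'}
""")
session.execute(
    "CREATE TABLE IF NOT EXISTS demo.events (id uuid PRIMARY KEY, payload text)"
)
for row in session.execute("SELECT id, payload FROM demo.events LIMIT 10"):
    print(row.id, row.payload)

cluster.shutdown()
```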

In addition, Microsoft is increasing the strength of the guarantees it makes for Cosmos DB’s availability. The company said that it will ensure 99.999 percent read availability for databases stored with Cosmos across multiple regions. That’s up from a 99.99 percent guarantee and comes alongside a trio of other guarantees around latency, throughput, and consistency.

Cosmos DB is designed to provide customers with a globally distributed database that they can use to power applications without having to manage the complexity of maintaining copies of data in multiple disparate locations.

Fully managed, globally distributed databases are services that many cloud providers have leaned into as a key tool for attracting business customers. They highlight some of the key promises of the cloud, with support for automatic scalability and the ability to reduce the workload placed on operations engineers.

The news comes a day after Google announced the general availability of its Cloud Spanner database service, which also guarantees 99.999 percent uptime.

While the two systems are both fully managed databases with global availability and multi-region replication at their core, they approach the problem in different ways. Cosmos DB is oriented around a NoSQL approach, while Cloud Spanner is built to behave like a traditional relational database.

One of Cosmos DB’s key features is the ability for users to select their preferred consistency model. That’s based on the CAP Theorem, which argues that a distributed database can guarantee only two of three traits: consistency, availability, and partition tolerance. If developers don’t need perfect consistency, they can get additional benefits when it comes to availability and performance, and Cosmos DB provides them with the tools to make those choices.

According to Azure CTO Mark Russinovich, the most popular consistency model is Session Consistency, which guarantees that a particular user of a database application will have all of their reads and writes be internally consistent, though data from other users may not match the same criteria.

This news is part of a raft of other announcements that Microsoft released as part of its Connect conference in New York today. The company revealed that it’s joining the open source MariaDB Foundation to support the development of that database software, as well as its use on Azure.

Microsoft also launched an Azure Databricks service that’s designed to make it easier for developers and data scientists to collaborate on real-time analytics using a cloud-hosted platform based on the Apache Spark project. That service also features native integration with Cosmos DB.

In addition, Microsoft unveiled new tools to improve developer workflows, make it easier to build AI systems, and more.

Correction 7:50 a.m. Pacific: This article previously said that the third trait in CAP Theorem is performance. It is not. The third trait is partition tolerance. 


Big Data – VentureBeat

Qubole raises $25 million for data analysis service


Qubole, which provides software to automate and simplify data analytics, announced today that it has raised $25 million in a round co-led by Singtel Innov8 and Harmony Partners. Existing investors Charles River Ventures (CRV), Lightspeed Venture Partners, Norwest Venture Partners, and Institutional Venture Partners (IVP) also joined.

Founded in 2011, the Santa Clara, California-based startup provides the infrastructure to process and analyze data more easily.

It’s possible for companies to store large amounts of information in public clouds without building their own datacenters. But they still need to process and analyze the data, which is where Qubole comes in.

“Many companies struggle with creating data lakes,” CEO Ashish Thusoo noted. His solution provides cloud-based infrastructure to break down the raw data without having to split it into silos.

The chief executive is well-versed in the matter, as he led a team of engineers at Facebook that focused on data infrastructure. “It gave us a front row seat of how modern enterprises should be using data,” he said.

Qubole claims to be processing nearly an exabyte of data in the cloud per month for more than 200 enterprises, which include Autodesk, Lyft, Samsung, and Under Armour. In the case of Lyft, the ride-sharing company uses Qubole to process and analyze its data for route optimization, matching drivers with customers faster.

Qubole offers a platform as a service (PaaS) that currently runs on Amazon Web Services (AWS), Microsoft Azure, and Oracle Cloud. “Google is something we’re looking at,” said Thusoo.

He said the biggest competitors in the sector include AWS, Cloudera, and Databricks, which recently closed a $140 million round of funding.

To date, Qubole has raised a total of $75 million. It plans on using the new money to further develop its product, increase sales and marketing efforts, and expand in the Asia Pacific (APAC) region.

“There is a significant opportunity for big data in the Asia Pacific region,” said Punit Chiniwalla, senior director at Singtel Innov8, in a statement.

Qubole currently employs 240 people across its offices in California, India, and Singapore.



Big Data – VentureBeat

Reddit CEO Steve Huffman on the site’s redesign, coming in Q1 2018

At Web Summit this week in Lisbon, Portugal, Reddit CEO Steve Huffman talked about the site’s upcoming redesign. Reddit launched in June 2005, and its look and feel has changed little over the past decade — certainly not at all in the past four or five years. We sat down with Huffman to dive a little deeper into the redesign and why he deems it critical to the site he cofounded.

In January, Huffman announced the plan. In July, Reddit raised $200 million (at a valuation of $1.8 billion) to redesign its website. The company started 2017 with some 140 employees. It now employs 250. Huffman believes Reddit needs to “catch up” in terms of look and feel before it can start truly innovating.

Reddit is also working on a slew of other projects to improve the site. The company is tweaking the algorithm to improve relevancy based on time spent in individual subreddits, running various experiments to improve onboarding of new users, building new content creation tools, improving moderation tools, tweaking what and how you can post on your profile, creating an event content type for pre/during/post coverage, improving content relevance for users abroad (50 percent of Reddit users reside in the U.S.), and exploring how to proactively grow internationally (80 percent of Reddit is in English).

But all of this pales in comparison to the upcoming redesign.

The decision

Every so often, someone on the web asks why so many poorly designed sites (Reddit, Hacker News, 4chan, to name a few) are so incredibly popular. The answer is hotly debated but always boils down to: Content is king. The one thing these sites do well is get out of the way and let the content do the talking.

Huffman explained that one of the best pieces of advice he ever received was a general business statement about restaurants: “People will put up with long lines, expensive prices, and bad service. They will eat at your restaurant if the food is good. And that’s the way I’ve always thought about Reddit. Our content is our food, and as long as the food is good, nothing else matters. That’s why Reddit — despite the product that doesn’t change, despite management that’s nonexistent or not very good, just being murdered in the press for five years, and an onboarding process that’s hostile, would be a generous way of describing it — continues to grow. If the content is good, if your business is producing real value, that can make up for a lot of other sins.”

Nonetheless, Huffman believes a redesign is necessary for Reddit to progress.

“Reddit did not succeed because it has a shitty UI,” Huffman said. “Reddit succeeded despite having a shitty UI. Reddit is succeeding because we have great content. We have people sharing themselves, helping each other, and creating all sorts of wonderful things. That core mechanic is what makes Reddit work, and that’s not going away.”

When asked about the dos and don’ts of redesigning your site, Huffman said the two were closely related. He offered two lessons for those considering a redesign. “Do: Make sure you’re trying to solve real problems. Don’t: Just create busy work for yourself.”

That’s all fine and dandy, but I still didn’t understand what is driving this decision. Huffman offered a few reasons, but the biggest motivation is the potential for huge growth.

“Desktop is no longer our largest platform,” Huffman revealed. “Reddit is actually majority mobile now, if you include mobile web. Our native apps, released about a year and a half ago, are still a minority of our daily active users, but are nearly 50 percent of our pageviews. The engagement in our native apps is very, very high compared to every other platform. Pretty much like 3x to 5x in every dimension. And part of that is because it’s a phone — it’s always on you and always accessible — and part of that is because the UI is modern, is a lot better. But the content is the same. That’s what makes me really excited about the redesign because the content is not changing, but the appearance of it is.”

Reddit does not have a traffic problem. Reddit has a conversion problem. “It’s an opportunity most companies don’t have. It’s a problem most companies don’t have. I guess I take the optimistic view,” Huffman conceded.

(Reddit doesn’t disclose how many daily active users it has, but monthly active users are in the hundreds of millions. Huffman suggested hitting 1 billion monthly active users is not out of the question.)

Reddit gets 3 million to 9 million new users every day that it does not capture, Huffman told VentureBeat. The spread of 6 million users likely reflects lurkers and incognito users who don’t want to be captured. But either way, every day, a few million new users arrive and never come back.

Huffman strongly believes the drop-off is due to the site’s design. “New user behavior is different from lurkers. For example, new users never go to communities because they don’t know communities exist. They don’t read comments because they don’t know comments exist. There’s a cohort of new users that are lost. I fundamentally believe that Reddit has something for everybody. If we get the presentation right and the experience right, the users will sign up. I make this claim that Reddit has something for everybody, but if you go to the frontpage, it’s hard for somebody to find their home.”

Wooing new users aside, there’s also a huge amount of technical debt that the team wants to get rid of. The hope is that the redesign will give the team a platform upon which it can iterate quickly.

“The code that Reddit is running on right now, that generates our website, a lot of that is code that I wrote about 10 years ago,” Huffman told VentureBeat. “And then you’ve had dozens of developers come through there. It’s really hard to work on the codebase. That will be a nice improvement; we’ll just be able to move faster.”

And of course, the redesign will enable Reddit to more easily sell what advertisers are already buying. Reddit’s most popular ad is text and native to the site experience. Native is a good thing in terms of user experience, but it also means the Reddit sales team has to build every ad from scratch with every new advertiser. That’s difficult to scale if Reddit wants to accelerate its growth.

So the redesign is supposed to achieve a few things: boost engagement, improve the experience for existing users, get new users to join, revamp the code base, and bring in more revenue. Oh, and hopefully don’t piss off the power users.

For those wondering what the redesign means for the Reddit Enhancement Suite (RES), Huffman wants to assuage their fears. RES should continue to work just fine.

“The lead maintainer of RES works at Reddit. RES is used by a lot of our power users, so RES was kind of our starting point. At Reddit, as an employee, you’re not allowed to use RES. For two reasons. One, our product needs to be good enough that you don’t need it, and so like do your job and make sure it’s good enough. And two, for security reasons. So yeah, RES was our starting point.”

What about the users who already hate the redesign?

“I know there are people who are just going to be stubborn about it, and I think we just have to accept that,” Huffman said. “We’re going to try to be as respectful as possible. Our purpose is to make the site better for them. Sometimes users don’t always see that or don’t always agree. We’ll keep iterating, and I’m very confident we’ll get there in time.”

The redesign

Huffman also talked about polish, noting that on a macro level, the redesign is already complete — it’s functional, but there are still many little friction points that can be really painful. He cited the infamous estimate that you spend 80 percent of your time doing the last 20 percent of work. “And that’s where we are right now. We’re trying to make sure the details are right, because those details are very important.”

The redesign consists of three views: card view, classic view, and compact view.

The card view behaves differently depending on the content type. But in short, it requires less clicking as images are already expanded, videos play in-line, and so on.

The card view in its current iteration looks like this:

Unsurprisingly, early testing shows new users prefer the card view and old users prefer the classic view. “I think there’s always going to be that tension, and I think that’s OK, though,” Huffman said. He wants to improve the card view to a point where existing users love it as well.

Huffman also addressed criticism that the card view looks like another popular site.

“When people say it looks too much like Facebook, I think that’s a lazy criticism. Reddit is a list of things. To the extent that our site is a list of things, and many websites are lists of things, it’s not surprising that we look similar. But what really matters is the substance of the content and who is behind it, and that’s what makes us special.”

And for the record, the card view won’t necessarily be made the default for everyone. That hasn’t been decided yet.

“I wouldn’t rule out making, for example, [classic] the default for power users. We don’t have to build a perfect product that works the same for everybody,” he said.

This is the classic view, which is almost identical to how Reddit appears right now:

This view isn’t going away, at least not anytime soon. “It doesn’t really cost us to run the old site. So we’ll run that for as long as we need to. Indefinitely if we have to,” Huffman said. “It would not be my preference, just from a focus point of view. But we know people have tools built around that site, have plugins and workflows that are really important. I’m not going to rush to turn it off, but I am going to rush to turn the other one on.”

I pressed Huffman on whether the classic view will still be updated. He said that it will; it won’t be shelved.

And finally, the compact view:

This one is clearly for power users, even if it might seem a bit ridiculous at first glance. There’s something about being able to see everything on one page that many geeks like yours truly gravitate towards.

While these three views have been set, there are still many aspects that haven’t been finalized. The biggest is around text posts, which account for 60 percent of Reddit content.

“We’re dramatically improving text posts, giving people a rich text editor,” Huffman said. “So instead of just having a paragraph and a couple of links, you can make a full blog post if you want. I actually think text posts are going to get a lot nicer and easier to do. The new editor is probably my favorite part of the redesign.”

As a Reddit user, that worries me almost as much as how the defaults will work for the various groups of users. Formatting text is fairly easy to abuse, and that can hurt the experience more than a redesign can improve it.

As for how text is going to appear on the frontpage, Huffman says there are various listing formats being tested, including ones that show just the title, a snippet, and the full post. The team will be gathering data on what users like best.

The rollout

The rollout strategy is a gradual one that Huffman described as a series of switches. “But no, we’re not just going to flip a switch. Honestly, I think that would be suicide. We’ve seen it before.”  The main reason for this is simple: Reddit doesn’t want to piss off its users, à la Digg.

Right now, the redesign is in an invite-only stage. Some Reddit users are already trying it out and providing feedback. The company even has some users come into its office and use the site as the team watches.

In December, all Reddit users will be able to opt in and try out the redesign. In Q1 2018 (the goal was previously Q4 2017), Huffman hopes to turn on the redesign for all users, by default.

Even then, there will still be one more switch to flip, Huffman says: setting what first-time users see when they visit the homepage. What that will look like is still up in the air.

Most importantly, though, users will still be able to switch between the three views: card, classic, and compact.

“I hope that helps soften the blow,” Huffman told VentureBeat.


Big Data – VentureBeat

New White Paper! Mainframe Data as a Service: Setting Your Most Valuable Data Free

Enterprises across the globe still run critical processes on mainframes. Estimates suggest that as much as 80% of corporate transactional data resides in mainframe systems. Unfortunately, in most cases the mainframe data in these enterprise organizations remains inaccessible to the new data analytics tools emerging on a variety of open source and cloud platforms, including Hadoop and Spark.

There is a tremendous opportunity to surface mainframe data into this new world of fast-moving open technology. Organizations that have freed up mainframe data for cross-enterprise consumption are achieving greater agility and flexibility, as well as lower costs.

Our latest white paper, Mainframe Data as a Service: Setting Your Most Valuable Data Free, dives deeper into the complex challenge of connecting Big Iron to Big Data and explores how extending Data as a Service (DaaS) to mainframe data opens up a large and often impenetrable source of valuable corporate information.


Download the white paper now!


Syncsort + Trillium Software Blog

12 Years in a Row…We Are a Leader in Gartner’s Data Quality Magic Quadrant!

Last week, Gartner published its Data Quality Magic Quadrant. I am very proud to say Syncsort’s Trillium data quality software is again a leader, for the 12th year in a row and every year since Gartner began publishing the Data Quality Magic Quadrant! We announced the recognition by Gartner today.

As you know, Syncsort acquired Trillium Software late last year. Since being part of Syncsort, my alma mater, we’ve been able to continue advancing our data quality portfolio of products with announcements such as Trillium Precise and Trillium Quality for Big Data (more on this below).


Creating a Single View of Customers

One of our customers’ most popular use cases is creating a single view of their customers. The software supports any entity type, not just customer (product, supplier, and so on), and we have customers using our products to take advantage of that flexibility, but customer is definitely the most prevalent.

The Data Quality Magic Quadrant report recognizes the strength and stability of Syncsort’s Trillium Software System (TSS) and Global Locator. Our product simply works, out of the box, and is reliable for our customers. I met with a customer just recently who told us that Trillium is very stable and is relied upon by over 1,500 analysts doing customer analytics every week.


We ensure a single view, and we also validate and enrich data such as worldwide postal addresses (no need to go to multiple third parties for the data) and geocoding (latitude/longitude). With Trillium Precise, we can now validate email addresses and phone numbers as well.

For more information around the customer 360 use case, check out our eBook: Getting Closer to Your Customers in a Big Data World

Deploying Data Quality across Big Data

In September, during the Strata Data Conference in New York City, we announced Trillium Quality for Big Data. With this product, new and existing customers can take their batch projects and now deploy them seamlessly into Big Data environments. Leveraging Syncsort’s Intelligent Execution technology, Trillium Quality for Big Data can be deployed seamlessly to a single server, Hadoop MapReduce or Spark.

Aligning Data Quality to Data Governance

Finally, our Trillium Discovery Center is getting rave reviews from our customers, allowing them to extend data profiling to non-technical users in the business. Our other most popular use case is around data governance. With our business rules, users can now create the technical implementation of data quality policies as defined in partner tools such as Collibra Data Governance Center and ASG Enterprise Data Intelligence. Keep an eye out in the coming weeks for more out-of-the-box integration with our partners’ products.

Oh, and by the way, the Trillium Discovery Center is completely browser-based, so it’s cloud-ready, another area where we are seeing tremendous growth and for which we received credit from Gartner.

I am very proud of Syncsort and our data quality software employees, customers and partners. I congratulate everyone for the hard work and the well-deserved recognition as a leader in the data quality market.

Download the Gartner Magic Quadrant report to learn how leading solutions can help you achieve your long-term data quality objectives.


Syncsort + Trillium Software Blog