Category Archives: Big Data

New eBook! Keep Your Data Lake Pristine with Big Data Quality Tools

The term Big Data doesn’t seem quite “big enough” anymore to properly describe the vast over-abundance of data available to organizations today. Business leaders have repeatedly expressed little confidence in the reliability of their Data Lake, especially as the volume and variety of Big Data sources continue to grow.

The very purpose of that data is to enable new levels of business insight and clarity. No one sets out to create a data swamp that provides nothing but confusion and distrust.

Our new eBook, Keep Your Data Lake Pristine with Big Data Quality Tools, takes a look at how the proper software can help align people, process, and technology to ensure trusted, high-quality Big Data.

Download the eBook to see how data quality can yield new insights for business success.

Syncsort + Trillium Software Blog

ProBeat: Google’s Gboard is helping me relearn languages

I’m trilingual, or at least that’s what I’ve been telling everyone for the past couple of decades. The truth is that ever since I graduated high school, my French has all but completely deteriorated. My Polish is a bit better, but only because I still use it to communicate with my parents — it has also fallen by the wayside. Enter Gboard.

The poorly named Google app (which replaced Google Keyboard in December 2016) is a virtual keyboard for Android and iOS devices. It has a bunch of built-in features, including Google Search results, predictive answers, GIFs, emojis, voice dictation, and so on.

But the one that I’ve embraced is multilingual language support. Gboard’s predictive typing engine, which uses machine learning to suggest the next word using context, is already good. If you add multiple languages in the app’s settings, however, Gboard becomes wonderful.

The killer feature, if you will, is that you don’t have to switch between languages when you’re typing. Gboard simply detects the tongue you’re writing in and offers suggestions in that language. And because this is a virtual keyboard app, all the functionality works everywhere you type on your device — all first-party and third-party apps that have any sort of input field.
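
How this detection works under the hood is Google’s business, but the basic idea is easy to sketch. Below is a minimal, purely illustrative Python sketch (not Gboard’s actual implementation) of scoring recent words against each enabled language so suggestions come from the best match; the tiny word-frequency lists are hypothetical stand-ins for a real language model.

```python
# Minimal sketch (not Gboard's implementation): score the recent words
# against each enabled language's frequency list and suggest from the
# best-scoring language. Word lists here are tiny, hypothetical stand-ins.
FREQ = {
    "en": {"the": 0.07, "you": 0.03, "keyboard": 0.001},
    "fr": {"le": 0.06, "tu": 0.03, "clavier": 0.001},
    "pl": {"to": 0.04, "ty": 0.02, "klawiatura": 0.001},
}

def detect_language(recent_words, smoothing=1e-6):
    """Pick the enabled language whose frequency list best explains the input."""
    scores = {}
    for lang, freq in FREQ.items():
        score = 1.0
        for word in recent_words:
            score *= freq.get(word.lower(), smoothing)
        scores[lang] = score
    return max(scores, key=scores.get)

print(detect_language(["tu", "le"]))    # -> "fr"
print(detect_language(["the", "you"]))  # -> "en"
```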

I have English, French, and Polish set up in Gboard. When I communicate in English, it works just like any other virtual keyboard. When I crack a rare joke in French, it corrects me as I write it out. When I’m typing to my parents, it offers suggestions in Polish — accents, correct conjugation, and all.

The beautiful part is that I’m learning from these suggestions. There is a ton of nuance in Polish, from multiple letters that sound identical when pronounced, to completely different letters that sound oh-so-similar. In French, verbs are conjugated. In Polish, every single word can be conjugated. When I’m typing and trying to sound out certain words, Gboard often helps me figure out how a given word is spelled.

Sometimes it’s a quick fix and I simply pick the top suggestion (“yeah, of course, I knew that!”) while other times it takes some trial and error for me to get close enough for Gboard to be able to correct me (“oh. OH!”). But in all cases, I learn the correct spelling, and thus the correct pronunciation.

Gboard isn’t perfect. Sometimes it throws in random suggestions from a different language, just because. But that’s a small price to pay to be able to accurately type in three languages.

Whether you’re fluent in more than one language or are just learning an additional tongue, I highly recommend this method. Of course, it doesn’t have to be Gboard. Just find a keyboard app that has multilingual language support for the languages you’re trying to learn (or re-learn), and off you go.

Let your keyboard do the teaching.

ProBeat is a column in which Emil rants about whatever crosses him that week.

Big Data – VentureBeat

Capacity Management 101: Best Practices to Align IT Resources with Business Goals

Syncsort’s recent acquisition of Metron has put a spotlight on Capacity Management and the capabilities provided by the athene® software solution. But what, exactly, is Capacity Management, and why is it important?

What is Capacity Management?

The primary goal of Capacity Management is to ensure that IT resources are right-sized to meet current and future business requirements in a cost-effective manner. One of the more common definitions of Capacity Management comes from the ITIL framework, which divides the process into three sub-processes: Business Capacity Management, Service Capacity Management, and Component Capacity Management.

Top-Down, Bottom-Up Approach

In a practitioner-level course, we typically teach the three sub-processes in a “top-down, bottom-up” approach. What does that mean?

  • Top-Down: Business needs drive the creation of services, which leads to the purchase of components with the computing power and other resources that make Information Technology solutions a reality.
  • Bottom-Up: When monitoring and analyzing the infrastructure, start with the components. Ensure each of these is right-sized and appropriate for the job. They underpin services – are those meeting SLAs? The services keep the business running – are the forecasts accurate, and do the services and components need to be upgraded or further right-sized to optimize IT spend?

Conceptually, it sounds pretty straightforward, but exactly how are these concepts put into practice in a modern data center?

5 Components of Capacity Management

The activities that support the Capacity Management process are crucial to the success and maturity of the process. Some of these are done on an ongoing basis, some daily, some weekly, and some at a longer, regular interval. Some are ad-hoc, based on current (or future) needs or requirements. Let’s look at those:

1. Monitoring

Keeping an eye on the performance and throughput or load on a server, cluster, or data center is extremely important. Not having enough headroom can cause performance issues. Having too much headroom can create larger-than-necessary bills for hardware, software, power, etc.
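
As a rough illustration of the headroom trade-off described above, here is a minimal Python sketch (hypothetical servers and thresholds, not athene®’s monitoring logic) that flags both too little and too much headroom:

```python
# Illustrative only: flag servers whose peak CPU utilization leaves too
# little headroom (performance risk) or far too much (wasted spend).
# Thresholds and sample data are hypothetical.
SAMPLES = {
    "app01": [62, 71, 88, 93],   # peak %CPU per interval
    "app02": [12, 9, 15, 11],
    "db01":  [55, 60, 58, 64],
}

LOW_HEADROOM = 85   # peaks above this risk missed SLAs
HIGH_HEADROOM = 20  # peaks below this suggest over-provisioning

for server, peaks in SAMPLES.items():
    peak = max(peaks)
    if peak >= LOW_HEADROOM:
        print(f"{server}: peak {peak}% - consider tuning or adding capacity")
    elif peak <= HIGH_HEADROOM:
        print(f"{server}: peak {peak}% - candidate for consolidation")
    else:
        print(f"{server}: peak {peak}% - within target headroom")
```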

2. Analysis

Analysis means taking that measurement and monitoring data and drilling down to see the potential impact of changes in demand. As more and more data become available, having the tools needed to find the right data and make sense of it is very important.

3. Tuning

Determining the most efficient use of existing infrastructure should not be taken lightly. A lot of organizations have over-configured significant parts of the environment while under-configuring others. Simply reallocating resources could improve performance while keeping spend at current levels.

4. Demand Management

Understanding the relationship of current and future demand and how the existing (or new) infrastructure can handle this is incredibly important. Predictive analytics can provide decision support to IT management. Also, moving non-critical workloads to quieter periods can delay purchase of additional hardware (and all the licenses and other costs that go with it).
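
As a toy illustration of shifting non-critical work to quieter periods, the following Python sketch (hypothetical utilization figures) places movable batch jobs into the quietest periods so the daily peak stays under a target:

```python
# Illustrative sketch: given per-period utilization from critical work, place
# movable batch jobs into the quietest periods so the daily peak (which often
# drives hardware and license spend) stays below a target. Data is hypothetical.
baseline = [80, 75, 40, 30, 25, 30, 55, 85]  # %CPU from critical work, per period
batch_jobs = [15, 10, 10]                    # %CPU each movable job adds

target_peak = 90
for job in batch_jobs:
    # put each job in the currently quietest period that stays under target
    idx = min(range(len(baseline)), key=lambda i: baseline[i])
    if baseline[idx] + job <= target_peak:
        baseline[idx] += job

print("peak after scheduling:", max(baseline))
```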

5. Capacity Planning

Determining the resources that will be required over some future time frame. This can be done by predictive analysis, modeling, benchmarking, or other techniques – all of which have varying costs and levels of effectiveness.
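
For the simplest of these techniques, trend analysis, the calculation can be illustrated in a few lines. The sketch below (hypothetical figures, Python with NumPy) fits a linear trend to monthly peak utilization and estimates how many months remain before a capacity limit is reached:

```python
# Illustrative sketch of trend-based capacity planning: fit a linear trend
# to monthly peak utilization and estimate when it crosses a capacity limit.
# Figures are hypothetical; real planning would also use modeling/benchmarking.
import numpy as np

months = np.arange(12)                                         # last 12 months
peak_util = 50 + 2.5 * months + np.random.normal(0, 1.5, 12)   # % of capacity

slope, intercept = np.polyfit(months, peak_util, 1)
limit = 90.0
months_to_limit = (limit - (intercept + slope * months[-1])) / slope
print(f"growth ~{slope:.1f} pts/month; ~{months_to_limit:.0f} months of headroom left")
```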

Capacity Management Information System

The centerpiece of a mature and effective Capacity Management process is the Capacity Management Information System, or CMIS.

The CMIS allows for easy access to Capacity and Performance data for reporting, analysis, predictive modeling and trending, and troubleshooting (Incident and Problem Management).

For over 30 years, athene® has been a leading solution for implementing, automating, and managing a mature cross-platform Capacity Management process. Syncsort’s acquisition of Metron ensures that athene® will continue to support legacy platforms such as System z and IBM i, as well as newer technologies and platforms, including Cloud environments.

One way for organizations to evaluate their Capacity Management process is to complete our Maturity Survey. Answer 20 quick questions about your organization and its processes, and you’ll immediately receive an initial maturity level as well as a comprehensive report with suggestions on how to improve your maturity.

Syncsort + Trillium Software Blog

Expert Interview (Part 2): James Kobielus on Reasons for Data Scientist Insomnia including Neural Network Development Challenges

In the first half of our two-part conversation with Wikibon lead analyst James Kobielus (@jameskobielus), he discussed the incredible impact of machine learning in helping organizations make better business decisions and be more productive. In today’s Part 2, he addresses what aspects of machine learning should be keeping data scientists up at night. (Hint: neural networks)

Several Challenges Involved with Developing Neural Networks

Developing these algorithms is not without its challenges, Kobielus says.

The first major challenge is finding data.

Algorithms can’t do magic unless they’ve been “trained.” And in order to train them, the algorithms require fresh data. But acquiring this training data set is a big hurdle for developers.

For eCommerce sites, this is less of a problem – they have their own data in the form of transaction histories, site visits and customer information that can be used to train the model and determine how predictive it is.

But the process of amassing those training data sets when you don’t have data is trickier – developers have to rely upon commercial data sets that they’ve purchased or open source data sets.

After getting the training data, which might come from a dozen different sources, the next challenge is aggregating it so the data can be harmonized with a common set of variables. Another challenge is having the ability to cleanse data to make sure it’s free of contradictions and inconsistencies. All this takes time and resources in the form of databases, storage, processing and data engineers. This process is expensive but essential. (For more on this, read Uniting Data Quality and Data Integration)
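
To make the aggregation and cleansing steps concrete, here is a minimal pandas sketch (hypothetical sources and columns) that maps two data sets onto a common set of variables and applies simple validity rules before training:

```python
# Illustrative sketch (hypothetical columns): harmonize two purchased data
# sets onto a common schema, then apply simple cleansing rules before the
# result is used as model training data.
import pandas as pd

src_a = pd.DataFrame({"cust_id": [1, 2, 2], "age": [34, 41, 41], "spend_usd": [120.0, 80.0, 80.0]})
src_b = pd.DataFrame({"customerId": [3, 4], "customer_age": [29, 212], "spend": [60.0, 300.0]})

# Harmonize: map each source's column names onto one shared set of variables
src_b = src_b.rename(columns={"customerId": "cust_id", "customer_age": "age", "spend": "spend_usd"})
combined = pd.concat([src_a, src_b], ignore_index=True)

# Cleanse: drop duplicates and records that contradict basic validity rules
combined = combined.drop_duplicates()
combined = combined[(combined["age"] > 0) & (combined["age"] < 120)]
print(combined)
```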

Third, organizations need data scientists, who are expensive resources. They need to find enough people to manage the whole process – from building to training to evaluating to governing.

“Finding the right people with the right skills, recruiting the right people is absolutely essential,” Kobielus says.

Before jumping into machine learning, organizations should also make sure it makes sense for their business strategies.

Industries like finance and marketing have made a clear case for implementing Big Data. In finance, it enables high-level analysis to detect things like fraud. And in marketing, CMOs have found it useful to develop algorithms that allow them to conduct sentiment analysis on social media.

There are a lot of uses for it to be sure, Kobielus says, but there are methods for deriving insights from data that don’t involve neural networks. It’s up to the business to determine whether using neural networks is overkill for their purposes.

“It’s not the only way to skin these cats,” he says.

If you already have the tools in place, then it probably makes sense to keep using them. Or, if you find traditional tools can’t address needs like transcription or facial recognition, then it probably makes sense to go to a newer form of machine learning.

What Should Really Be Keeping Data Scientists Up at Night 

While those in the tech industry might be fretting over whether AI will displace the gainfully employed or that there’s a skills deficit in the field, Kobielus has other worries related to data science.

For one, the algorithms used for machine learning and AI are really complex and they drive so many decisions and processes in our lives.

“What if something goes wrong? What if a self-driving vehicle crashes? What if the algorithm does something nefarious in your bank account? How can society mitigate the risks?” Kobielus asks.

When there’s a negative outcome, the question asked is who’s responsible. The person who wrote the algorithm? The data engineer? The business analyst who defined the features?

These are the questions that should keep data scientists, businesses, and lawyers up at night. And the answers aren’t clear-cut.

In order to start answering some of these questions, there needs to be algorithmic transparency, so that there can be algorithmic accountability.

Ultimately, everyone is responsible for the outcome.

There’s a huge legal gray area when it comes to machine learning because the models used are probabilistic and you can’t predict every single execution path for a given probabilistic application built on ML.

“There’s a limit beyond which you can anticipate the particular action of a particular algorithm at a particular time,” Kobielus says.

For algorithmic accountability, there need to be audit trails. But an audit log for any given application has the potential to be larger than all the databases on Earth. Not just that, but how would you roll it up into a coherent narrative to hand to a jury?

“Algorithmic accountability should keep people up at night,” he says.

Just as he said concerns about automation are overblown, Kobielus says it’s also unnecessary to worry that there aren’t enough skilled data scientists working today.

Data science is getting easier.

Back in the ’90s, developers had to know underlying protocols like HTTP, but today nobody needs to worry about the protocol plumbing anymore. It will be the same for machine learning, Kobielus says. Increasingly, the underlying data is being abstracted away by higher-level tools that are more user friendly.

“More and more, these things can be done by average knowledge workers, and it will be executed by underlying structure,” he says.

Does Kobielus worry about the job security of data scientists then? Not really. He believes data science automation tools will allow data scientists to do more with less and hopefully free them to develop their skills in more challenging and creative realms.

For 5 key trends to watch for in the next 12 months, check out our new report: 2018 Big Data Trends: Liberate, Integrate & Trust

Syncsort + Trillium Software Blog

Expert Interview (Part 1): Wikibon’s James Kobielus Discusses the Explosive Impact of Machine Learning

It’s hard to mention the topics of automation, artificial intelligence or machine learning without various parties speculating that technology will soon throw everybody out of their jobs. But James Kobielus (@jameskobielus) sees the whole mass unemployment scenario as overblown.

The Future of AI: Kobielus Sees Progress Over Fear

Sure, AI is automating a lot of knowledge-based and not-so-knowledge-based functions right now. It is causing dislocations in our work and in our world. But the way Kobielus looks at it, AI is not only automating human processes, it’s augmenting human capabilities.

“We make better decisions, we can be more productive … We’re empowering human beings to do far more with less time,” he says. “If fewer people are needed for things we took for granted, that trend is going to continue.”

It’s anybody’s guess how the world will look in the future, Kobielus says. But he doesn’t believe in the nightmare scenarios in which AI puts everyone out of a job. Why? Basic economics.

The industries that are deploying AI won’t have the ability to get customers if everyone is out of a job.

“There needs to be buying power in order to power any economy, otherwise the AI gravy train will stop,” he says.

Kobielus is the lead analyst with Wikibon, which offers market research, webinars and consulting to clients looking for guidance on technology. His career in IT spans more than three decades and three-quarters of it has been in analyst roles for different firms. Before going to Wikibon, he spent five years at IBM as a data science evangelist in a thought leadership marketing position espousing all things Big Data and data science.

He talks regularly on issues surrounding Big Data, artificial intelligence, machine learning and deep learning.

How Machine Learning is Impacting Industry Today

Machine learning is a term that’s been around for a while now, Kobielus says. At its core, it’s simply using algorithms and analytics to find patterns in data that you wouldn’t have been able to find otherwise. Regression models and support vector machines are examples of more established forms of machine learning. Today, a newer crop of algorithms is lumped under what are called neural networks or recurrent neural networks.
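
The distinction is easy to see in code. The sketch below (illustrative only, using scikit-learn on synthetic data) trains an established model, logistic regression, and a small feed-forward neural network on the same toy classification task:

```python
# Illustrative comparison: an "established" model (logistic regression) and a
# small feed-forward neural network, both trained on the same synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
neural = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("logistic regression accuracy:", linear.score(X_te, y_te))
print("small neural network accuracy:", neural.score(X_te, y_te))
```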

“That’s what people think of as machine learning – it’s at the heart of industry now,” Kobielus says.

Brands are using these neural network tools for face and voice recognition, natural language processing and speech recognition.

Applied to text-based datasets, machine learning is often used to identify concepts and entities so that they can be distilled algorithmically to determine people’s intentions or sentiments.

“More and more of what we see in the machine learning space is neural networks that are deeper,” Kobielus says. “[They’re] not just identifying a face, but identifying a specific face and identifying the mood and context of situation.”

They’re operating at much higher levels of sophistication.

And rather than just being used in a mainframe, more often these algorithms are embedded in chips that are being put into phones, smart cars and other “smart” technologies.

Consumers are using these technologies daily when they unlock their phones using facial recognition, ask questions to tools like Alexa or automatically tag their friends on Facebook photos.

More and more industries are embracing deep learning – machine learning that is able to process media objects like audio and video in real time, offering automated transcription, speech to text, and facial recognition, or the ability to infer a user’s intent from their gestures or words.

Beyond just translating or offering automated transcriptions, machine learning provides a real-time map of all the people and places being mentioned and shares how they relate to each other.

In the internet of things market, anybody in the consumer space who wants to build a smart product is embedding deep learning capabilities right now.

Top Examples of Machine Learning: Self-Driving Cars and Translations

Kobielus points to self-driving vehicles as a prime example of how machine learning is being used.

“They would be nothing if it weren’t for machine learning – that’s their brains.”

Self-driving vehicles process a huge variety of input including images, sonar, proximity, and speed, as well as the behavior of the people inside – inferring their intent, where they want to go, what alternative routes might be acceptable based on voice, gestures, their history of past travel and more.

Kobielus is also excited about advances in translation services made possible by machine learning.

“Amazon Translate, machine translation between human languages in real time, is becoming scary accurate, almost more accurate than human translation,” Kobielus says.

In the not-too-distant future, he predicts that people will be able to just wear an earpiece that will translate a foreign language in real-time so they will be able to understand what people are saying around them enough to at least get by, if not more.

“The perfect storm of technical advances are coming together to make it available to everybody at a low cost,” he says.

Learn more about the top Big Data trends for 2018 in Syncsort’s eBook based on their annual Big Data survey.

Syncsort + Trillium Software Blog

Transform customer engagement with location intelligence (VB Live)

Location intelligence can help brands deliver dynamic user experiences, better understand their customers and prospects, and boost consumer engagement and delight. Join this VB Live event to learn how to effectively incorporate location intelligence into your digital strategies and transform your customer relationships.

Register here for free. 


Location intelligence has evolved. It’s not just longitude and latitude anymore — it’s context, says David Bairstow, VP of product at Skyhook. It’s the signals of billions and billions of mobile devices processed against known locations — from airports, sports stadiums, and college campuses to coffee shops and Burger King — and it means a revolution at the intersection of customer intelligence and mobile advertising targeting.

Brands know more about their consumers than anybody else. They know when consumers are on their properties, whether it’s at their physical location or in their online store — but that represents just a fraction of that person’s day.

“Location data helps brands understand their customers when they’re not spending time with them,” Bairstow says. “If Burger King is the brand, they know a particular customer likes Whoppers or Quarter Pounders with cheese, but does that customer also spend a lot of time at Chick-fil-A?”

The technology has become sophisticated enough to deliver the kind of precision required to detect not only that a customer is nearby, but that they’ve actually pulled into a gas station and stopped — a perfect scenario in which to deliver a targeted, engaging message. These kinds of marketing and advertising scenarios have always been the promise of location intelligence. But that sophistication also means that marketers can leverage this new facet of customer data by building it into other channels as well.
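
That kind of precision boils down to geofencing plus dwell detection. The following Python sketch (hypothetical venue, radius, and dwell threshold; not Skyhook’s algorithm) flags a device as having pulled in and stopped when consecutive fixes stay near a known venue long enough:

```python
# Illustrative sketch (not Skyhook's algorithm): decide that a device has
# "pulled in and stopped" at a known venue when consecutive location fixes
# stay within a small radius of the venue for a minimum dwell time.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371000 * 2 * asin(sqrt(a))

VENUE = (42.3601, -71.0589)      # hypothetical gas station
RADIUS_M, MIN_DWELL_S = 50, 120  # hypothetical thresholds

def dwelling(fixes):
    """fixes: list of (timestamp_s, lat, lon) ordered by time."""
    inside = [t for t, lat, lon in fixes if haversine_m(lat, lon, *VENUE) <= RADIUS_M]
    return bool(inside) and (inside[-1] - inside[0]) >= MIN_DWELL_S

fixes = [(0, 42.36005, -71.05885), (90, 42.36008, -71.05892), (180, 42.36010, -71.05889)]
print(dwelling(fixes))  # True -> a good moment for a targeted message
```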

“Just because it’s coming from mobile location data doesn’t mean that the only kind of delivery channel is a real-time location-based trigger for a message,” he explains. “If you can learn a lot more about your customers by understanding who they are, what their preferences are, where they go, where they spend their time, then you can build it into a broader, smarter marketing campaign.”

He points to Skyhook’s recent study for a high-end clothing brand, which identified mobile devices as a proxy for people who had visited its stores and analyzed that audience data, exploring the common behaviors and traits among everyone who had visited the store.

When they compared the behaviors of the group that visited the target stores to the broader panel of 50 million devices, they found that these shoppers over-indexed dramatically, Bairstow says. They were roughly 10 times more likely to visit high-end yoga studios, high-end gyms, and the top athleisure brands.
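
The over-index figure itself is just a ratio of visit rates between the store’s visitor segment and the broader device panel; with made-up numbers:

```python
# Hypothetical numbers: "over-indexing" is the ratio of a behaviour's rate in
# the store's visitor segment to its rate in the broader device panel.
segment_devices = 120_000        # devices seen at the target stores
segment_yoga_visitors = 18_000   # of those, also seen at high-end yoga studios

panel_devices = 50_000_000
panel_yoga_visitors = 750_000

lift = (segment_yoga_visitors / segment_devices) / (panel_yoga_visitors / panel_devices)
print(f"over-index: {lift:.1f}x")  # -> 10.0x with these made-up figures
```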

“It was a real epiphany for the store,” he says. “With those insights they are now experimenting with working with some of the brands their consumers associate with, as well as building real-time messaging based on when a customer is having another experience which has an association with their own brand.”

Cutting the creep factor

But how do you manage the risk of driving consumers away — or having them refuse to give up their data and their consent? You go back to the basics: delivering value, which needs to start at the beginning of customer engagement with a request for permission to use their location data.

Permission rates vary dramatically depending on the type of app and the trust that people have in a given brand. If the SPG Starwood Rewards app asks to use a customer’s location, that customer understands intuitively how that data might be used when they’re traveling. If it’s a social app or similar, where they can’t immediately see why you need their location, they’re going to say no more often than yes, Bairstow explains. They need to see an explicit value exchange — for instance, ‘I want your location so I can give you offers when you’re near one of my stores,’ which provides a clear benefit to the customer.

“I think there’s a spectrum in terms of what brands should, or need, to give to customers to make them feel comfortable with their use of location data, because ultimately it’s a value exchange,” he says. “It’s, ‘I’m going to give you a better experience because of it,’ or the big one, ‘I’m going to give you money,’ if it’s coupons and promotions.”

To learn more about the customer intelligence breakthroughs that companies like Deloitte and Skyhook are developing, and how to garner location-based insights that supercharge your CRM system and help you build a more powerful marketing plan, don’t miss this VB Live event!


Don’t miss out!

Register here for free.


During this webinar you’ll learn how to:

  • Boost engagement with real-time, location-based consumer engagement and experiences
  • Gain insight into the behavioral patterns of customers and prospects
  • Understand the future of location data for your business

Speakers:

  • David Bairstow, VP Product, Skyhook
  • Prince Nasr Harfouche, Principal, Deloitte Consulting LLP
  • Stewart Rogers, Analyst at Large, VentureBeat (Moderator)

Sponsored by Skyhook

Big Data – VentureBeat

New eBook! Supercharge Your CRM with Built-In Data Quality

Organizations today rely on Customer Relationship Management (CRM) systems to effectively interact and engage with customers and prospects; however, maintaining the data integrity of the underlying customer records is another story.

Syncsort’s latest eBook, Supercharge Your CRM with Built-In Data Quality, gives some insight into the challenges that organizations face when dealing with their own data.

In this eBook we take a look at how poor data quality can happen in the first place, why it’s a constant problem, and why the level of data quality has such a pervasive impact on businesses. You’ll discover how embedded data quality within your CRM effectively addresses these challenges.

Download the new eBook today to see how Syncsort’s Data Quality software can ensure that your organization has clean and real-time data.

Syncsort + Trillium Software Blog

Tableau’s data visualization platform now supports Linux, promises faster operations

Tableau announced today that its new Hyper data engine is generally available to customers, providing a massive speed boost for existing processes through its business intelligence and analysis software.

The company also announced the general availability of its Linux server product, which is built on top of Hyper. This will allow people to run Tableau’s system on top of the popular open source operating system — in addition to Windows — and fulfills a longstanding request from users.

These two updates are part of Tableau’s version 10.5 release of its software and come as the company faces tough competition in the business intelligence and analysis space. Tableau has to contend with new cloud services popping up from major players in the tech market — like Microsoft and Amazon.

Hyper could provide a major boost to existing Tableau customers since it works with and speeds up the existing processes they have in place. Customers won’t have to rewrite queries in order to take advantage of the performance boost — the software is supposed to just make their existing work faster.

The Hyper engine also enables a new Viz in Tooltip feature, which lets people see visual breakdowns of data just by mousing over parts of a dashboard. That allows users to better understand the makeup of entries on a Tableau dashboard without having to write code to do so.

For example, users can now mouse over charts and see automatically generated breakdowns of the numerical data behind each field in near real time, without having to write a query or take up additional dashboard space showing that information.
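
Conceptually, the tooltip is computing the kind of filtered breakdown an analyst would otherwise build as a separate query or worksheet. A rough pandas illustration (hypothetical data, not Tableau’s internals):

```python
# Hypothetical data: the kind of per-field breakdown a hover surfaces, which
# users would otherwise have to assemble as a separate query or worksheet.
import pandas as pd

sales = pd.DataFrame({
    "region":   ["East", "East", "East", "West", "West"],
    "category": ["Office", "Tech", "Tech", "Office", "Tech"],
    "revenue":  [1200, 3400, 2100, 900, 4700],
})

hovered_region = "East"  # the mark the user is mousing over
breakdown = (sales[sales["region"] == hovered_region]
             .groupby("category")["revenue"].sum())
print(breakdown)
```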

On top of all those features, this release includes content governance capabilities that make it easier for customers to manage who can access different portions of the data stored inside a Tableau system.

Big Data – VentureBeat

Mainframe Cost Reduction Tips to Maximize Your Data Infrastructure ROI

Getting the most from your mainframe means more than taking advantage of technologies like containers or integrating your mainframe into DevOps workflows. You also want to optimize around mainframe cost reduction. Keep reading for five tips for lowering your mainframe costs.

If you know anything about mainframes, you know they are a big investment. Prices for a new mainframe start in the tens of thousands of dollars, and can easily reach into six figures – for just a single computer.

Compare that to a commodity x86 server, which might cost you $2,000 at the high end, and it’s obvious that mainframes entail a pretty hefty monetary commitment.

5 Mainframe Cost Reduction Tips

With such a large commitment comes a need for cost optimization. If you want to maximize your mainframe ROI, you’ll be on the lookout for strategies that can help you reduce mainframe setup and operating costs, such as those detailed below.

1. Virtualize Your Mainframe Workloads

Although commodity servers have supported virtualization for a long time, mainframes continue to offer unparalleled support for virtualized workloads. IBM promises that a single mainframe can support as many as 8,000 virtual machines. You’d be lucky to run eighteen virtual machines on a commodity server.

When planning your mainframe architecture, strive to take full advantage of virtualization. Virtualization makes workloads more portable and scalable and helps ensure that your mainframe’s capacity is not under-utilized.

2. Consider Linux for Mainframe

One of the most powerful features of mainframes is their ability to run Linux-based software environments, as well as native mainframe environments.

Taking advantage of Linux as a host environment for some of your applications on your mainframe can significantly reduce your overall operating costs — especially because Linux on the mainframe makes it possible to move some of your applications from commodity servers onto your mainframe.

3. zIIP and zAAP Your Mainframe Workloads

One of the most significant advances in mainframe hardware is IBM’s release of zIIP and zAAP processors for z Systems. These are special processors optimized for certain kinds of workloads, such as database processing and network traffic encryption.

Because zIIP and zAAP processors are less expensive, they can help to lower your mainframe hardware acquisition costs without compromising performance. By sending appropriate workloads to zIIP and zAAP processors, and saving your more expensive mainframe processors for other types of tasks, you get more bang for your mainframe buck.
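
The economics are straightforward to sketch. With made-up prices (real zIIP/zAAP pricing and workload eligibility vary by contract and environment), shifting eligible work off general-purpose processors looks like this:

```python
# Hypothetical cost model: general-purpose mainframe capacity is typically
# priced (hardware plus software licensing) far higher than specialty-engine
# capacity, so shifting eligible work off general processors lowers spend.
total_mips = 2000
eligible_share = 0.30            # fraction of work that is zIIP/zAAP-eligible
gp_cost_per_mips = 3500          # made-up annual $ per general-purpose MIPS
specialty_cost_per_mips = 1000   # made-up annual $ per specialty-engine MIPS

baseline = total_mips * gp_cost_per_mips
offloaded = (total_mips * (1 - eligible_share) * gp_cost_per_mips
             + total_mips * eligible_share * specialty_cost_per_mips)
print(f"estimated annual savings: ${baseline - offloaded:,.0f}")
```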

4. Modernize Your Mainframe Applications

You may be running mainframe applications that were written decades ago. Their code can be modernized to run more efficiently by taking advantage of modern mainframe hardware, running more tasks in parallel and so on.

If you’re thinking that modernizing your mainframe applications requires an expensive and time-consuming overhaul of their codebases, think again. It’s possible to refactor your mainframe software without rewriting it. In fact, some vendors even offer automated refactoring solutions targeted at mainframes.

Investing in a little application modernization can do much to make your applications run more efficiently.

5. Automate, Automate, Automate!

Any task – whether it involves a mainframe or a different part of your infrastructure – that is performed manually is bound to be time-consuming and costly. That’s why automation is king when it comes to optimizing costs.

On your mainframe, opportunities to automate center on areas like data offloading and transformation, or integrating data and applications with the rest of your infrastructure.
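
As a simple illustration of what such automation can look like on the receiving side, here is a hypothetical Python job (made-up paths, run from cron or a scheduler) that picks up the previous day’s offloaded extracts without a manual hand-off:

```python
# Illustrative automation sketch (hypothetical paths): a small scheduled job
# that moves yesterday's offloaded extracts into the data lake landing zone
# and logs the result, replacing a manual daily hand-off.
import datetime
import pathlib
import shutil

SOURCE_DIR = pathlib.Path("/staging/mainframe_extracts")   # hypothetical
ARCHIVE_DIR = pathlib.Path("/datalake/landing")            # hypothetical

def offload_daily_extracts():
    yesterday = (datetime.date.today() - datetime.timedelta(days=1)).isoformat()
    moved = 0
    for path in SOURCE_DIR.glob(f"*{yesterday}*.csv"):
        shutil.move(str(path), ARCHIVE_DIR / path.name)
        moved += 1
    print(f"{datetime.datetime.now().isoformat()} offloaded {moved} file(s) for {yesterday}")

if __name__ == "__main__":
    offload_daily_extracts()  # run from cron or a job scheduler
```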

Next Steps: Learn More About Mainframe Optimization

For even more specific actions you can take toward mainframe cost reduction, watch this insightful webinar:

The Future of Mainframe Optimization

In this IBM Systems Magazine webinar, you’ll learn about key mainframe optimization problems, opportunities, and use cases. Discover how the latest innovations for zIIP and sort will save you both time and money, explore game-changing features of workload-centric database optimization for DB2 and IDMS, and learn about new solutions for long-standing network management issues.

Watch the IBM Systems Magazine webinar now >

Syncsort + Trillium Software Blog

U.S. House passes bill to renew NSA warrantless internet surveillance

(Reuters) — The U.S. House of Representatives on Thursday passed a bill to renew the National Security Agency’s warrantless internet surveillance program, overcoming objections from privacy advocates and confusion prompted by morning tweets from President Donald Trump that initially questioned the spying tool.

The legislation, which passed 256-164 and split party lines, is the culmination of a years-long debate in Congress on the proper scope of U.S. intelligence collection – one fueled by the 2013 disclosures of classified surveillance secrets by former NSA contractor Edward Snowden.

Senior Democrats in the House had urged cancellation of the vote after Trump appeared to cast doubt on the merits of the program, but Republicans forged ahead.

Trump initially wrote on Twitter that the surveillance program, first created in secret after the Sept. 11, 2001, attacks and later legally authorized by Section 702 of the Foreign Intelligence Surveillance Act (FISA), had been used against him but later said it was needed.

Some conservative, libertarian-leaning Republicans and liberal Democrats attempted to persuade colleagues to include more privacy protections. They failed on Thursday to pass an amendment to include a requirement for a warrant before the NSA or other intelligence agencies could scrutinize communications belonging to an American whose data is incidentally collected.

Thursday’s vote was a major blow to privacy and civil liberties advocates, who just two years ago celebrated passage of a law effectively ending the NSA’s bulk collection of U.S. phone call records, another top-secret program exposed by Snowden.

The bill as passed by the House would extend the NSA’s spying program for six years with minimal changes. Some privacy groups said it would actually expand the NSA’s surveillance powers.

Most lawmakers expect it to become law, although it still would require Senate approval and Trump’s signature. Republican Senator Rand Paul and Democratic Senator Ron Wyden immediately vowed to filibuster the measure, but it was unclear whether they could persuade enough colleagues to force changes.

The Senate will hold a procedural vote on the bill next week after it returns from a break, U.S. Senate Majority Leader Mitch McConnell said on Thursday.

“The intelligence community and the Justice Department depend on these vital authorities to protect the homeland and keep Americans safe,” McConnell, a Republican, said in a statement.

The White House, U.S. intelligence agencies and Republican leaders in Congress have said they consider the surveillance program indispensable and in need of little or no revision.

Before the vote, a tweet from Trump had contradicted the official White House position and renewed unsubstantiated allegations that the previous Democratic administration of Barack Obama improperly surveilled the Republican’s 2016 presidential campaign.

“This is the act that may have been used, with the help of the discredited and phony Dossier, to so badly surveil and abuse the Trump Campaign by the previous administration and others?” the president wrote in a tweet.

“We need it!”

The White House did not immediately respond to a request to clarify Trump’s tweet, but he posted a follow-up less than two hours later, after speaking on the phone with House Republican leader Paul Ryan.

“With that being said, I have personally directed the fix to the unmasking process since taking office and today’s vote is about foreign surveillance of foreign bad guys on foreign land. We need it! Get smart!” Trump tweeted.

Unmasking refers to the largely separate issue of how Americans’ names kept secret in intelligence reports can be revealed.

After the vote Thursday, Ryan, asked about his conversation with the president, said Trump’s concerns regarded other parts of the law.

“It’s well known that he has concerns about the domestic FISA law. That’s not what we’re doing today. Today was 702, which is a different part of that law. … He knows that and he, I think, put out something that clarifies that,” Ryan told reporters.

Asked by Reuters at a conference in New York about Trump’s tweets, Rob Joyce, the top White House cyber official, said there was no confusion within the Oval Office about the value of the surveillance program and that there have been no cases of it being used improperly for political purposes.

Trump’s tweets on surveillance marked the second time this week that he appeared to veer from the administration’s position. During a meeting on Tuesday to discuss immigration with a bipartisan group of legislators, he initially voiced support when Democratic Senator Dianne Feinstein suggested a “clean” bill to protect undocumented immigrants brought to the United States as children.

House Majority Leader Kevin McCarthy pointed out that a “clean” bill would not include the border security measures and wall that Trump has insisted be part of any immigration plan.

Press secretary Sarah Sanders told reporters there was no contradiction in Trump’s tweets on the surveillance program and that he was voicing broader concerns about FISA.

Without congressional action, legal support for Section 702 will expire next week, although intelligence officials say it could continue through April.

Section 702 allows the NSA to eavesdrop on vast amounts of digital communications from foreigners living outside the United States through U.S. companies such as Facebook, Verizon Communications, and Alphabet’s Google.

Big Data – VentureBeat