
Tag Archives: Over

Mom called – wanna come over for dinner?

August 19, 2019   Humor

Posted by Krisgo


About Krisgo

I’m a mom who has worn many different hats in this life; from scout leader, camp craft teacher, parents group president, colorguard coach, and community band member to stay-at-home mom and full-time worker, I’ve done it all – almost! I still love learning new things, especially creating and cooking. Most of all, I love to laugh! Thanks for visiting – come back soon!


Deep Fried Bits

Power BI Premium new capacity settings allow for more control over datasets

August 16, 2019   Self-Service BI

Five new Power BI Premium capacity settings are available in the portal, preloaded with default values. Admins can review and override the defaults to better protect their capacity and avoid problems before they arise. With the release of these new settings, Power BI admins can now set limits on “Max Offline Dataset Size” (allowable range: 0.1 to 10 GB) and/or set limits on queries to prevent noisy reports from impacting others using the capacity, creating a more predictable and performant capacity.

As an admin, you can find these new settings in the Admin portal: on the Capacity settings page, select the capacity and expand the “Workload” chevron on the management tab.

 

The table below describes each setting in detail, with a real-world scenario explaining its benefits.

Max Offline Dataset Size (GB)
Description: The maximum size of the offline dataset in memory; this is the compressed size on disk. The default value is set by SKU, and the allowable range is 0.1–10 GB.
Scenario: When users experience slowness because a large dataset is consuming memory, admins often end up in the same cycle: identify the culprit datasets, then contact the owner or migrate to a different capacity. With this new setting, admins can cap dataset size, preventing report creators from publishing a large dataset that could take down the capacity and sparing themselves that painful identify-and-mitigate cycle.

Query Memory Limit (%)
Description: Applies only to DAX measures and queries. Specified as a percentage; limits how much memory temporary results can use during a query.
Scenario: Some queries or calculations produce intermediate results that use a lot of memory on the capacity. This can make other queries execute very slowly, cause eviction of other datasets from the capacity, and lead to out-of-memory errors for other users. Without this setting, capacity admins would struggle to identify which report or query was causing the problem so they could work with the report author to improve its performance. With it, admins can better contain the impact of a bad or expensive report on others using the capacity.

Query Timeout (seconds)
Description: An integer that defines the timeout, in seconds, for queries. The default is 3600 seconds (60 minutes). Note that Power BI reports already override this default with a much smaller timeout, typically around 3 minutes, for each of their queries to the capacity.
Scenario: When users see reports spinning, usually because of another expensive report, admins sometimes have to move the workspace to a different capacity. With this new setting, admins can rein in expensive queries so they have less impact on other users of the capacity.

Max Intermediate Row Set Count
Description: The maximum number of intermediate rows returned by DirectQuery. The default value is 1,000,000, and the allowable range is 100,000 to 2,147,483,647.
Scenario: When a query to a DirectQuery dataset pulls a very large result from the source database, it can cause a spike in memory as well as a lot of expensive data processing, leaving other users and reports short on resources. This setting lets the capacity admin adjust how many rows an individual query can fetch from the data source.

Max Result Row Set Count
Description: The maximum number of rows returned in a DAX query. The default value is -1 (no limit), and the allowable range is 100,000 to 2,147,483,647.
Scenario: A user can execute an expensive DAX query that returns a very large number of rows, consuming substantial resources and affecting other users and reports on the capacity. This setting lets the capacity admin limit how many rows any individual DAX query may return.
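The range checks above are straightforward to encode. Below is a minimal, illustrative Python sketch of validating a proposed set of workload settings against the documented ranges before applying them; the setting keys and the helper itself are hypothetical, not part of any Microsoft API, and the 0–100 bound for the memory limit is assumed because the setting is a percentage.

```python
# Illustrative validator for the workload settings described above.
# The keys and this helper are hypothetical, not a Microsoft API.
ALLOWED_RANGES = {
    "MaxOfflineDatasetSizeGb": (0.1, 10.0),              # compressed size on disk
    "QueryMemoryLimitPct": (0, 100),                     # assumed: it is a percentage
    "MaxIntermediateRowSetCount": (100_000, 2_147_483_647),
    "MaxResultRowSetCount": (100_000, 2_147_483_647),    # -1 is the "no limit" sentinel
}

def validate_workload_settings(settings):
    """Return a list of human-readable violations; an empty list means OK."""
    problems = []
    for name, value in settings.items():
        if name not in ALLOWED_RANGES:
            problems.append(f"unknown setting: {name}")
            continue
        if name == "MaxResultRowSetCount" and value == -1:
            continue  # sentinel meaning "no limit" (the default)
        lo, hi = ALLOWED_RANGES[name]
        if not (lo <= value <= hi):
            problems.append(f"{name}={value} outside allowed range [{lo}, {hi}]")
    return problems

proposed = {"MaxOfflineDatasetSizeGb": 5, "MaxResultRowSetCount": -1}
print(validate_workload_settings(proposed))  # []
```

Running such a check before saving settings mirrors what the portal does with its own input validation.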

Moving forward

Be sure to submit your ideas for new features and learn more about Power BI Premium and its capacities.

Microsoft Power BI Blog | Microsoft Power BI

These Minecraft videos show off over 500 hours of demonstrations

August 1, 2019   Big Data

Who knew Minecraft offered such a rich training ground for AI and machine learning algorithms? Earlier this month, Facebook researchers posited that the hit game’s constraints make it well-suited to natural language understanding experiments. And in a newly published paper, a team at Carnegie Mellon describes a 130GB–734GB corpus intended to inform AI development — MineRL — that contains over 60 million annotated state-action pairs (recorded over 500 hours) across a variety of related Minecraft tasks, alongside a novel data collection scheme that allows for the addition of tasks and the gathering of complete state information suitable for “a variety of methods.”

“As demonstrated in the computer vision and natural language processing communities, large-scale datasets have the capacity to facilitate research by serving as an experimental and benchmarking platform for new methods,” wrote the coauthors. “However, existing datasets compatible with reinforcement learning simulators do not have sufficient scale, structure, and quality to enable the further development and evaluation of methods focused on using human examples. Therefore, we introduce a comprehensive, large-scale, simulator paired dataset of human demonstrations.”

For those unfamiliar, Minecraft is a voxel-based building and crafting game with procedurally created worlds containing block-based trees, mountains, fields, animals, non-player characters (NPCs), and so on. Blocks are placed on a 3D voxel grid, and each voxel in the grid contains one material. Players can move, place, or remove blocks of different types, and attack or fend off attacks from NPCs or other players.

As for MineRL, it includes six tasks with a variety of research challenges including multi-agent interactions, long-term planning, vision, control, and navigation, as well as explicit and implicit subtask hierarchies. In the navigation task, users have to move to a random goal location over terrain with variable material types and geometries, while in the tree chopping objective, they’re tasked with obtaining wood to produce other items. In another task, players are instructed to produce objects like pickaxes, diamonds, cooked meat, and beds, and in a survival task, they must devise their own goals and secure items to complete those goals.

Each trajectory in the corpus consists of a video frame from the player’s point-of-view and a set of features from the game-state, namely player inventory, item collection events, distances to objectives, and player attributes (health, level, achievements). That’s supplemented with metadata like timestamped markers for when certain objectives are met, and by action data consisting of all of the keyboard presses, changes in view pitch and yaw caused by mouse movement, click and interaction events, and chat messages sent.
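A rough mental model of one such state-action pair, based on the description above, can be sketched as plain Python dataclasses. The field names here are illustrative, not the dataset’s actual schema.

```python
# Illustrative shape of one MineRL state-action pair (hypothetical field names).
from dataclasses import dataclass

@dataclass
class GameState:
    inventory: dict                 # e.g. {"log": 4, "planks": 0}
    health: float                   # player attribute
    level: int                      # player attribute
    distance_to_objective: float    # distance feature from the game state

@dataclass
class Action:
    keys_pressed: list              # keyboard presses during this tick
    camera_delta: tuple             # (pitch, yaw) change from mouse movement
    chat: str = ""                  # chat message sent, if any

@dataclass
class StateActionPair:
    frame_id: int                   # index of the POV video frame
    state: GameState
    action: Action
    objective_met: bool = False     # timestamped objective marker (metadata)

pair = StateActionPair(
    frame_id=0,
    state=GameState(inventory={"log": 1}, health=20.0, level=0,
                    distance_to_objective=42.5),
    action=Action(keys_pressed=["w"], camera_delta=(0.0, 1.5)),
)
print(pair.state.inventory["log"])  # 1
```

Sixty million such pairs, each tied to a video frame, is what gives the corpus its scale.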

To collect the trajectory data, the researchers created an end-to-end platform comprising a public game server and a custom Minecraft client plugin that records all packet-level communication, allowing the demonstrations to be re-simulated and re-rendered with modifications to the game state. Collected telemetry data was fed into a data processing pipeline, and the task demonstrations were annotated automatically.

The researchers expect MineRL will support large-scale AI generalization studies by enabling the re-rendering of data with different constraints, like altered lighting, camera positions, and other video rendering conditions and the injection of artificial noise in observations, rewards, and actions.  “We expect it to be increasingly useful for a range of methods including inverse reinforcement learning, hierarchical learning, and life-long learning,” they wrote. “We hope MineRL will … [bolster] many branches of AI toward the common goal of developing methods capable of solving a wider range of real-world environments.”

Big Data – VentureBeat

“Our Army manned the air, it ran the ramparts, it took over the airports, it did everything it had”

July 10, 2019   Humor


Not Steampunks, but Electromagnetic ones, because “more discovery would be premature”. #RevolutionaryWarAirports is still trending. Lost in the irony is Cadet Bone Spurs being spun as doing a military recruiting event.

Trump’s speech took the audience through historic and cultural highlights of America’s 243-year history, lauding the revolutionaries that threw off the British yoke and created the new nation in 1776.

But for all his patriotic passion, the president got certain details wrong. The most obvious—and the one which set social media buzzing—was his apparent suggestion that airports were a key strategic target for the Continental Army as it took the fight to the British in 1781.

During his tribute to the army, Trump explained its formation and early difficult years. Turning to the famous American victory at the Siege of Yorktown, the president said the army “manned the air, it rammed the ramparts, it took over the airports, it did everything it had to do.”

Social media users were quick to note that air travel did not develop in the U.S. until the early 1900s, with the Wright brothers first taking to the air in 1903. Trump appears to have forgotten this, considering he mentioned the brothers’ achievements earlier in his speech.


Donald Trump mocked for saying the Continental Army took over the airports in 1781—122 years before Wright Brothers’ first flight https://t.co/JoeHSuZ6E8

— Newsweek (@Newsweek) July 5, 2019


Fact check: there were no airports during the Revolutionary War. The Wright Brothers “achieved the first successful airplane flights on December 17, 1903” according to the National Park Service. https://t.co/PE4E7DTm5Y https://t.co/hatTBDFOh5

— Jim Acosta (@Acosta) July 5, 2019


“I’m a very honest guy,” Trump told reporters before and after making various false claims.

— Daniel Dale (@ddale8) July 5, 2019

The event was designed for use as campaign ads with the Leni Riefenstahl touch.

“I have not yet begun my flight.” ― John Paul Jones



“We hold these truths to be self-evident, that the red zone has always been for loading and unloading; there is never stopping in a white zone.”  


My Dearest Elizabeth,

Still stuck at Atlanta. General Washington is not happy. He blames the King. Shit’s about to go down I tell ya.

With warmest regards and uttermost affection,

Thomas
#RevolutionaryWarAirports

— M (@IamMrRoboto) July 5, 2019


Did airports exist in the late 1700s or early 1800s? Definitely not.

Then what was President Trump talking about?https://t.co/zxTY6ExL2Y

— The New York Times (@nytimes) July 5, 2019


That’s because it was during the Revolutionary war.

— Subpoena Meredith McIver (@nicollefarup) July 5, 2019

Brand New Sherman Tanks



In other historical revisionism….


So to those looking for clarity on what’s ahead in the #2020Census litigation:

There it is: Clear as mud. Commerce may or may not adopt “a new rationale,” which may or may not require future discovery.

We’ll see if Judge Hazel has any questions requiring a phone conference.

— Adam Klasfeld (@KlasfeldReports) July 5, 2019

moranbetterDemocrats

Tier IV raises over $100 million to develop open source software for driverless cars

July 6, 2019   Big Data

Tier IV, a Japan-based driverless car software maintainer and provider, this week announced the closure of a round north of $100 million led by Sompo Japan Nipponkoa Insurance, with participation from Yamaha Motor, KDDI, JAFCO, and Aisan Technology. The fresh capital brings the company’s total raised to nearly $130 million following seed rounds totaling $28 million, and founder Shinpei Kato said it’ll fuel the global commercialization and expansion of Tier IV’s self-driving technology platform.

“Tier IV has a mission to embody disruptive creation and creative disruption with self-driving technology. We have derived a solid software platform and successfully integrated it with real vehicles,” said Kato. “It is time to step forward to real services, embracing functional safety and risk management.”

Tier IV, a University of Tokyo spinout founded in December 2015, spearheads the development of Autoware, which it describes as an “all-in-one” open source and BSD-licensed solution for autonomous vehicles. The platform supports things like 3D localization and mapping, 3D path planning, object and traffic signal detection, and lane recognition, plus tasks like sensor calibration and software simulation.

Tier IV funds this development in part by selling support equipment like remote controllers and logging devices, as well as desktops and laptops with Autoware preinstalled. Additionally, it offers subscription access to its data sets, labeling tools, and deep learning training services for $1,000 per year.

Tier IV’s stated mission is to “democratize” intelligent cars by enabling “any individual or organization” to contribute to their development. To this end, it and partner companies Apex.AI and Linaro 96Boards launched the nonprofit Autoware Foundation last December, which seeks to deploy Autoware in production products and services. The Foundation counts 30 companies among its membership, and Tier IV claims that Autoware has already been adopted by more than 200 organizations around the world to date, including Udacity (for its Nanodegree Program), the U.S. Department of Transportation Federal Highway Administration (in its CARMA platform), automotive manufacturers, and “many” self-driving startups.

Field tests of Autoware-powered cars have been conducted in over 60 regions in Japan and overseas “without incident,” according to Tier IV, and the company claims that vehicles running on its platform achieved level 4 autonomy (meaning they could operate safely without oversight in select conditions) as early as December 2017.

Tier IV competes to an extent with Baidu, which offers an open source driverless software stack of its own in Apollo. The Beijing-based tech giant claims that Apollo — which has grown to 400,000 lines of code, more than double the 165,000 lines of code the company announced in January 2018 — is now being tested, contributed to, or deployed by Intel, Nvidia, NXP, and over 156 global partners, including 60 auto brands. Notable Apollo collaborators include Chinese automobile manufacturers Chery, BYD Auto, and Great Wall, as well as Hyundai Kia, Ford, and VM Motori.


Big Data – VentureBeat

Introducing dataflow templates: a quick and efficient way to build your sales leaderboard and get visibility over your sales pipeline

June 30, 2019   Self-Service BI

Dataflows help organizations unify data from disparate sources and prepare it for reusability. You can use dataflows to ingest data from a large and growing set of supported sources including Dynamics 365, Salesforce, Azure SQL Database, Excel, SharePoint, and more. You can then map data to standard entities in the Common Data Model, in Azure Data Lake Storage Gen2. Ingesting data can sometimes be tedious and time consuming, especially when the schema between the source and destination is different.

A dataflow template can expedite this process by providing a predefined set of entities and field mappings to enable the flow of data from your source to the destination in the Common Data Model. A dataflow template commoditizes the movement of data, which in turn reduces the overall burden and cost for a business user. It gives you a head start on ingesting data: you don’t need to know and map the source and destination entities and fields yourself – dataflow templates do it for you.

This month, we are releasing a set of dataflow templates that help you unify data from popular CRM apps such as Dynamics 365 for Sales and Salesforce.

Leads and opportunities (Dynamics 365 for Sales)
Use this template to build your sales leaderboard and get visibility over your sales pipeline. Sales managers can use the data to monitor their sales team’s performance, including lead conversion and opportunity closure rates. Salespeople can get visibility over their top opportunities and leads.

Quotes, orders, invoices (Dynamics 365 for Sales)
Connect to quote, order, and invoice data in Dynamics 365 for Sales to analyze quote closure rates and product line performance.

Lead to cash (Dynamics 365 for Sales)
Get actionable insights by analyzing data from Dynamics 365 for Sales. Whether you are a sales executive looking for visibility over the sales leaderboard or your top opportunities, a sales manager who wants to review your team’s performance, or a VP of sales looking for revenue from deals, this template helps connect you to the sales data you need.

Accounts, leads, opportunities (Salesforce)
You store your accounts, leads, and opportunities in Salesforce – no problem! Gain insights by analyzing lead conversions and opportunity closures from Salesforce. (Note: Salesforce trial accounts are not supported at this time.)

Creating a dataflow using the Dynamics 365 for Sales Lead to cash template

1. Create a new dataflow and choose the Lead to cash template from the “Choose data source” screen in Power Query Online.

2. Provide the Dynamics 365 for Sales instance that you want to connect to.

3. Provide credentials for your Dynamics 365 for Sales instance.

4. The ‘Edit Queries’ screen shows you the predefined entities for the Lead to cash template, populated with your source data.

5. Click ‘Map to standard’ in the top menu to confirm that the source data fields have already been mapped to the Common Data Model.

6. Save and close your new dataflow to start the query validation process, which ensures the data can be loaded.

7. That’s it – provide a name and save your dataflow. You have successfully created a dataflow using a dataflow template.
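Conceptually, the ‘Map to standard’ step applies a predefined source-to-CDM field mapping, something like the toy Python sketch below. The field names and the mapping itself are illustrative, not the actual template definition.

```python
# Conceptual sketch of "Map to standard": rename source fields to
# Common Data Model attribute names. The mapping below is hypothetical.
LEAD_TO_CDM = {
    "leadid": "leadId",
    "subject": "subject",
    "firstname": "firstName",
    "lastname": "lastName",
    "estimatedvalue": "estimatedValue",
}

def map_to_standard(source_row, mapping):
    """Project a source record onto CDM attribute names, dropping unmapped fields."""
    return {cdm: source_row[src] for src, cdm in mapping.items() if src in source_row}

row = {"leadid": "42", "firstname": "Ada", "lastname": "Lovelace", "internal_flag": 1}
print(map_to_standard(row, LEAD_TO_CDM))
# {'leadId': '42', 'firstName': 'Ada', 'lastName': 'Lovelace'}
```

A template ships many such mappings, one per predefined entity, which is what saves the business user from building them by hand.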

Once you’ve created a dataflow, you can use Power BI Desktop and the Power BI service to create datasets, reports, dashboards, and apps that leverage the power of the Common Data Model to drive deep insights into your business activities.

Microsoft Power BI Blog | Microsoft Power BI

A Pod world: How you’d trade your data for services over a decentralized internet

April 7, 2019   Big Data

In an era of big data and AI, what are the roles of decentralized internet and data storage concepts? The tensions and contradictions of these parallel developments were unpacked at SXSW in a compelling talk, Designing For the Next 30 Years of the Web, by Justin Bingham (CTO of Janeiro Digital) and John Bruce (Co-founder and CEO of Inrupt). They presented a whole new way of storing data and therefore breaking the current privacy paradigm, and their approach merits discussion outside of just one tech conference.

Decentralizing the web

Data is the core of the internet. However, as the internet has evolved, the way data is exchanged has shifted significantly from the intentions of one of its creators, Sir Tim Berners-Lee, who had envisioned an internet where information exchange did not include the transfer of actual data to the requesting party. Instead, he believed data would only exist with its owner and the internet would consist of links to it for reading and writing purposes.

That’s why Berners-Lee started the Solid project, which defines standards for a decentralized internet. Personal data is kept by the individual user and not stored centrally with each service supplier.

Late last year, open source startup Inrupt built an application based on the Solid standards that enables a more peer-to-peer internet with Personal Online Data Stores (Pods) for everyone. The Solid network is fully conceptualized around these Pods, which contain all the data of one person, whether it be your bank account or latest social posts. In this case, the data referring to you is fully owned by you. Both Inrupt and people within the Solid Community provide Pods that run on their respective servers, but you can also create your own Pod on your own server for ultimate privacy. There is no central owner of these Pods, since this would undermine the Solid principle.

One single integration

With Pods, all services, from your favorite taxi company to your insurance company, would communicate through one API with your personal data, each having separate read and write access to different parts of that data whilst reading and writing simultaneously. To cater to this, Inrupt started working with Janeiro Digital to create an open standard that all applications can work with. The beauty of this is that applications only need to learn one standard and integrate with the Pod to provide a data-driven service. Integrations between different services are no longer required.

Imagine writing an application that could combine and show posts from different social networks; one would have to retrieve data from each of them. Instead, if each of these social networks stored their posts in the Pod, this new application could simply be granted access to all posts by its owner, reducing the number of integrations to just one. Furthermore, if this new application wants to combine posts with other personal data, it could easily grab that information from your Pod.
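The access model described here can be sketched as a toy in-memory “Pod” with per-application grants. Real Solid Pods speak HTTP and RDF with standardized access control; this illustrative Python class only captures the single-store, many-consumers idea.

```python
# Toy in-memory "Pod": one data store, per-application read/write grants.
# Purely illustrative; real Solid Pods use HTTP/RDF and standard ACLs.
class Pod:
    def __init__(self):
        self._data = {}      # path -> value, e.g. "social/posts" -> [...]
        self._grants = {}    # app -> {path: {"read", "write"}}

    def grant(self, app, path, *modes):
        """The Pod owner grants an application access modes on a path."""
        self._grants.setdefault(app, {}).setdefault(path, set()).update(modes)

    def read(self, app, path):
        if "read" not in self._grants.get(app, {}).get(path, set()):
            raise PermissionError(f"{app} may not read {path}")
        return self._data.get(path)

    def write(self, app, path, value):
        if "write" not in self._grants.get(app, {}).get(path, set()):
            raise PermissionError(f"{app} may not write {path}")
        self._data[path] = value

pod = Pod()
pod.grant("social-app", "social/posts", "read", "write")
pod.grant("aggregator", "social/posts", "read")   # the new app needs read only
pod.write("social-app", "social/posts", ["hello world"])
print(pod.read("aggregator", "social/posts"))  # ['hello world']
```

Note how the aggregator never integrates with the social network at all; it only asks the owner’s Pod for access, which is the single-integration point the talk describes.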

Big data & AI

So how would big data, machine learning, and AI work within such a Pod-based world? All of these concepts rely heavily on centralized storage, and Pods are anything but that; especially when Pods are hosted all over the world, with no guarantees on network availability.

If data cannot be accumulated and needs to be fetched and interpreted over millions of Pods, how would it be possible to perform any machine learning without a significant performance penalty? And even if the data could be replicated and combined with more data, wouldn’t this then contradict the whole idea of Solid in the first place? And even if that is feasible, though temporarily, wouldn’t users simply set their preferences so their data couldn’t be used for data mining?

The big players

The companies that service a huge chunk of the current centralized internet are the ones that most rely on possessing our data. The majority of their turnover, which drives their shareholder value, is based on the data they collect from us; data they will never willingly give up for the purpose of the greater good. These companies will not embrace initiatives like Solid.

On the other hand, Bruce and Bingham also explained how Pods can introduce new benefits to companies and customers by enabling instant access to more data. One example is the combination of wearable data with that of an insurance policy, where the step-counter of your smartwatch could instigate a lower premium. (Of course, that’s an over-simplified example, since in the real world, the user would likely also have to consent to sharing other data, such as the food they purchase, which could then result in an increased premium.) All in all, it is likely companies will use Pods to trade consents, where certain services will only be available if certain consents are given. It is up to you to decide if the benefit is worth the trade-off. But how fragile will this freedom of choice be when it comes to basic services like healthcare?

Catching up

The beauty of Solid lies with its simplicity, which showcases that it’s not compatible with current, complex website structures and their profit model of collecting data. The internet has become extremely vast and consists of many established platforms. Trying to change that will take an enormous amount of time, development effort, and most of all, goodwill. Having a completely new approach that disqualifies all existing applications out there can only succeed if it can grow to a similar size or bigger. Still, the Solid project is young; hopefully it will gain traction. Since the start of Inrupt, it has already seen a lot of attention, so the potential is certainly there.

Sandor Voordes is a Technical Director at Dept (Design & Technology Netherlands). A developer in the past, he now works as a technical lead and architect on large digital platforms. Sandor is a strong believer in a best-of-breed approach, where solutions combine multiple integrations. Over the last two years, he was also part of the Dept task team that introduced GDPR practices and helped Dept’s clients become GDPR compliant.

Big Data – VentureBeat

Amy Webb highlights over 300 tech trends in annual report

March 10, 2019   Big Data

The Future Today Institute today unveiled the latest annual Tech Trends Report, which highlights 315 trends, up from 225 last year. This is the 12th year of the Tech Trends Report.

The report highlights top trends in areas like energy, robotics, AI, transportation, data, privacy, and security.

Future Today Institute director and New York University Stern School of Business professor Amy Webb will release the report and detail highlights in a presentation today at SXSW in Austin, Texas.

The report is written to be accessible to Fortune 500 companies as well as small business owners, universities, governments, and startups.

The 2019 Tech Trends Report comes hot on the heels of the release Tuesday of The Big Nine, a collection of optimistic, pragmatic, and catastrophic predictions about the future of humanity through the expansive influence of some of the largest tech companies in the world like Google, Microsoft, Tencent, and Baidu.

The report includes trend breakdowns by industry, from banking to beauty to tech as well as advice for how to plan for the future.

Webb argues that many organizations aren’t thinking the right way about time and how far into the future they should be making plans.

“To effectively plan for the future, organizations need to learn how [to] think about time differently,” the report reads. “Start retraining yourself to think about change and disruption to your organization and industry across different timeframes and build actions for each. The next 12-36 months – tactical actions. 3-5 years – strategic action. 5-10 years – vision and R&D initiatives. 10+ years – how you and your organization can create.”

The report also includes optimistic, pragmatic, and catastrophic scenarios about things like a collision between autonomous driving data and regional privacy laws, flying taxis, and drones as a source of renewable energy.

Like last year, the 2019 Tech Trends Report begins with a look at artificial intelligence trends. AI has been a trend on the report for the past decade.

Webb expects AI that mimics people online like Molly and bias in AI systems to continue.

Bias isn’t just a problem that could lead to racially offensive results (like Google Photos identifying a woman of African descent as a monkey, or vision systems for autonomous vehicles that fail to see people of color); it can also lead to decisions about people’s lives that aren’t fully understood or explainable. Real-world consequences of biased AI will continue to proliferate, Webb predicts, as it decides things like who can rent a car, receive a bank loan, or receive certain medications.

Last year, the 2018 Tech Trends Report declared China was on its way to becoming an unmatched AI hegemon; this year’s report says China has solidified that status. A report by business analytics firm Elsevier predicts China could lead the world in total AI research papers produced within the next five years.

Evidence of this growing influence is highlighted not only by advances made by companies like Baidu and Alibaba but also by computer vision startups like Megvii (maker of Face++) and SenseTime, which has attracted a $4.5 billion valuation.

The Chinese establishment of an equivalent of the U.S. Department of Defense’s DARPA research agency was also pointed to as evidence of China’s advantage in AI.

“No other country’s government is racing towards the future with as much concentrated force and velocity as China. The country’s extraordinary investments in AI could signal big shifts in the balance of geopolitical power in the years ahead,” the report reads.

In data, Webb points out that data governance and retention policies and data lakes are being adopted by more organizations.

AI services from cloud providers were also highlighted as a major trend; companies like Amazon (AWS), Microsoft (Azure), and Google (Google Cloud Platform) lead in the United States, with Alibaba and Baidu leading the way in China. Since cloud customers often stick with their initial vendor, what’s at stake is no less than hundreds of billions of dollars.

Consolidation of talent and resources by the dominant companies will continue, a development Webb cautions should bring pause and consideration.

“When it comes to the future of AI, we should ask whether consolidation makes sense for the greater good, and whether competition—and therefore access—will eventually be hindered as we’ve seen in other fields such as telecommunications and cable,” the report reads.

Webb predicts that the export of China’s social credit score system could also soon emerge, along with personal data records (PDRs) that unify your school and work history, financial records, and legal, travel, and shopping information.

“PDRs don’t yet exist, but from my vantage point there are already signals that point to a future in which all the myriad sources of our personal data are unified under one record provided and maintained by the Big Nine. In fact, you’re already part of that system, and you’re using a proto-PDR now. It’s your email address.”

Other biometric trends include facial recognition and unique voice signatures, emotion detection, bone structure detection, and even personality recognition like the kind deployed by Cambridge Analytica to help get Donald Trump elected president. Synthetic biometrics are also beginning to emerge.

“Political candidates, law firms, marketers, customer service reps and others are beginning to use new systems that review your online behavior, emails and conversations you have by phone, and in person, to assess your personality in real time. The goal: to predict your specific needs and desires,” the report reads.

Revenge porn and the use of systems like Amazon’s Rekognition by law enforcement and persistent audio surveillance systems that are always listening and analyzing conversations are also on the rise. WiFi and radio waves can now be used to identify a person’s location, sleep cycle, or emotional state.

The use of your voice to carry out computing with smart speakers is another clear trend. Webb cites research that says roughly 40 percent of U.S. households will get smart speakers by the end of 2019, and that about half of searches will be done with voice by 2020. Ambient computing in vehicle infotainment systems is also part of what Webb refers to as the voice assistant wars.

AI chipsets and unique programming languages for AI frameworks like Uber’s Python-based Pyro may also become a trend. Facebook AI Research chief scientist Yann LeCun recently suggested that deep learning may need a new programming language.

Among intelligent machines, Webb and FTI expect autonomous vehicles designed for last-mile travel or services like food delivery to grow in adoption.

Soft and molecular robotics, as well as teams of robots working together, and human-machine collaboration were identified as trends this year. Robot abuse — incidents of people attacking things like security or food delivery robots — is also a trend.

In transportation, the report points to drone operation centers for managing drone fleets, drone swarms, autonomous ships and underwater vehicles, and the return of supersonic commercial air travel.

For the second year in its history, the 2019 Tech Trends Report also includes a breakdown of the smartest cities on the planet.

Smart cities are those with initiatives for smart buildings and waste reduction, abundant public WiFi hotspots, and 4G or 5G connectivity.

The smartest city in the world, according to the report, is Copenhagen, Denmark. The top 10 cities are in the Nordic nations of Denmark, Finland, Sweden, and Norway, while San Francisco at number 20 and Boston at number 22 are the only U.S. cities in the top 25. No African cities made the top 25.

Big Data – VentureBeat

Read More

Dave Chappelle Says R. Kelly And His “Goons” Threatened Him Over Skit!

January 19, 2019   Humor

Dave Chappelle says R. Kelly and his “goons” bum-rushed him over his famous skit mocking the singer’s urination video — but Dave’s sense of humor definitely saved the day.

The comic hopped onstage at the WeHo Improv Monday night with his friend and “Chappelle’s Show” costar, Donnell Rawlings. Dave revealed Kelly and his crew approached him in Chicago — during a Common concert — and made it clear they were, umm … pissed about his “Piss on You” skit.

Dave says things never got physical, but he says Kelly grilled him about the skit and asked how he could possibly have done such a thing.

The irony of that question was not lost on Dave, who had the most awesome and hilarious response.  Check out the video … classic story.

Chappelle also addressed some of the criticism he’s faced in the wake of Lifetime’s “Surviving R. Kelly” docuseries — saying he wants comedians to call out things that are wrong with the world, period.

We asked Dave about the criticism this week — but he opted to let his pal, D.L. Hughley, do most of the talking that night.

Source: TMZ



The Humor Mill

Read More

ProBeat: We can’t get over how human Google Duplex sounds

November 24, 2018   Big Data

Watch the above video. Then watch it again, but close your eyes. Listen carefully to the voice making a restaurant reservation.

Duplex — Google’s artificially intelligent chat agent that can arrange appointments over the phone — has started rolling out to a “small group” of Google Pixel phone owners in select cities (Atlanta, New York City, Phoenix, and San Francisco). For now, the feature only works in English, with some restaurants, and can’t handle any other businesses that take appointments.

As news of the feature becoming slowly available has spread, much of the debate has focused on whether it’s worth the effort. As many have pointed out, it seems faster to just call the restaurant yourself than to input everything Google Assistant requires and wait for a confirmation. There are plenty of scenarios where this is useful, though: if you have a speech impediment or social anxiety about making phone calls, if you’re somewhere you can’t place a call, if the restaurant is closed when you want to make the reservation, and so on.

I want to focus on the other hotly discussed part of the news: the Google Duplex voice. Many can’t get over just how humanlike it sounds, although I’ve watched the video so many times that I’ve convinced myself it doesn’t sound human.

Too human

If you listen very closely, you will notice “mistakes” in how the Duplex AI speaks. I put mistakes in quotes because I’m not entirely sure Google wants the technology to perfectly mimic how a human assistant would conduct the conversation.

What Duplex actually says sounds extremely believable — especially the multiple thank-yous and the “ba-bye” at the end. But you can tell that something is a bit off if you pay attention to the pauses. They are a little too long, especially at the very beginning and at the end. At the start, a human might fill a gap like that with an umm or an uhh, out of respect for the person on the other side. At the end, it’s clear Duplex isn’t going to hang up first (until it gets some sort of confirmation, anyway).

That’s what I’m calling “mistakes.” But I don’t know if Google is striving for perfection. And frankly, I don’t think it should be.

Getting a conversational AI’s voice to not sound robotic makes sense — it’s simply more pleasant and comfortable to talk to. But having it perfectly replicate what a human would do? That’s simply too much of a good thing.

Disclosure and transparency

In this Duplex ad from earlier this year, here is how the voice introduced itself:

Hi! I’m the Google Assistant calling to make a reservation for a client. This automated call will be recorded.

In the call we recorded, the wording has changed slightly, removing the part that makes it crystal clear this is not a human calling:

Hi, I’m calling to make a reservation for a client. I’m calling from Google, so the call may be recorded.

I’m sure Google is still iterating here — the wording will likely change a few more times. The team could in fact be A/B testing multiple versions.
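An A/B test of disclosure wording could be as simple as deterministically bucketing each call into a variant and comparing outcomes per bucket. A minimal sketch of that idea follows; the variant strings are the two wordings quoted above, but everything else (function names, the use of a call ID, the hashing scheme) is a hypothetical illustration, not Google’s actual implementation.

```python
import hashlib

# The two disclosure wordings observed so far (hypothetical test candidates).
VARIANTS = [
    "Hi! I'm the Google Assistant calling to make a reservation for a client. "
    "This automated call will be recorded.",
    "Hi, I'm calling to make a reservation for a client. "
    "I'm calling from Google, so the call may be recorded.",
]

def assign_variant(call_id: str) -> str:
    """Deterministically bucket a call into a wording variant.

    Hashing the call ID yields a stable, roughly uniform split across
    variants, so each wording's outcomes (e.g. reservation completed,
    call abandoned) can be compared over many calls.
    """
    digest = hashlib.sha256(call_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(VARIANTS)
    return VARIANTS[bucket]
```

Because the assignment is a pure function of the call ID, repeated lookups for the same call always return the same wording, which keeps per-call logs consistent.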

But there’s a reason this disclosure is in here. You’ll remember that Google received a ton of criticism after its initial Duplex demo in May — many were not amused that Google Assistant mimicked a human so well. In June, the company promised that Google Assistant using Duplex would first introduce itself.

This is a double-edged sword. If Duplex gets things wrong and screws up the conversation, it makes Google look bad. If Duplex tries too hard to act human, it comes off as creepy and … makes Google look bad.

The trick is to strike a perfect balance: accurate and intelligent, but also transparent and honest.

While Duplex is a user-facing feature, currently exclusive to Pixel phones, it is ultimately businesses that interface with the conversational AI. That’s the part it can’t screw up. Google has to tread lightly on that tightrope or the whole experience will come crashing down.

More videos to come

We may have recorded the first video of Duplex in action, but I suspect this is going to birth a whole genre of new content.

Duplex is going to mess up, and it will be hilarious. Duplex is going to make serious mistakes, and it will be concerning. Duplex is going to get things too right, and it will be scary.

But hey, at least the internet will document it with plenty of videos.

ProBeat is a column in which Emil rants about whatever crosses him that week.

Big Data – VentureBeat

Read More
© 2019 Business Intelligence Info
Power BI Training | G Com Solutions Limited