Tag Archives: Testing

Waymo to expand autonomous truck testing in the American Southwest

June 30, 2020   Big Data

Today during a briefing with members of the media, Waymo head of commercialization for trucking Charlie Jatt outlined the company’s go-to-market plans for Waymo Via, its self-driving delivery division. In the future, Waymo will partner with OEMs and Tier 1 suppliers to equip trucks manufactured and sold to the market with its autonomous systems. In addition, Waymo will work with fleets to provide its software services and offer support for things like mapping and remote fleet assistance.

As Waymo transitions to this model, Jatt said the company intends to own and offer its own fleet of trucks — at least in the short term. One of the delivery solutions it’s exploring is a transfer-hub model where, rather than an automated truck covering an entire journey, the trip is split between an automated portion and a portion handled by manually driven trucks. Automated vehicle transfer hubs close to highways would handle the handoff and minimize surface street driving.

In a first step toward this vision, Waymo says it’ll soon expand testing on roads in New Mexico, Arizona, and Texas along the I-10 corridor between Phoenix and Tucson, as previously announced. This year it mapped routes between Phoenix, El Paso, Dallas, and Houston and ramped up testing in California on freeways in Mountain View, but the focus in 2020 will be on the American Southwest.


Tests will be primarily along Interstates 10, 20, and 45 and through metropolitan areas like El Paso, Dallas, and Houston. Chrysler Pacifica vans retrofitted with Waymo’s technology stack will map roads ahead of driverless Peterbilt trucks as part of a project known as Husky.

Waymo is also engaged with local delivery under the Waymo Via umbrella, the company reiterated. It currently has two partnerships in the Phoenix area — one with AutoNation and one with UPS. On the AutoNation side, Waymo is performing “hot shot” deliveries where Waymo vehicles travel to certain AutoNation locations and deliver car parts. And on the UPS side, the company is ferrying packages from stores to UPS sorting centers.

Waymo began piloting dedicated goods delivery with Class 8 trucks — 18-wheelers — in 2017. After completing tests in 2018 with real loads from Google datacenters in Atlanta, Waymo began limited testing on roads in the San Francisco Bay Area, Michigan, Arizona, and Georgia, and on Metro Phoenix freeways.

Waymo’s autonomous trucks employ a combination of lidars, radars, and cameras to understand the world around them. They have roughly twice as many sensors as Waymo’s cars to handle the trucks’ unique shape and the occlusions they cause, and they place a greater emphasis on long-range perception (the perception range is somewhere beyond 300 meters). But they use the same compute platform found in the fifth-generation Waymo Driver.

As the pandemic drives unprecedented growth in the logistics and ground transportation market, Aurora, TuSimple, and other rivals are investing increased resources in fully autonomous solutions. Such systems stand to save the logistics and shipping industry $70 billion annually while boosting productivity by 30%, and according to a recent study from the Consumer Technology Association, a quarter (26%) of consumers now view autonomous delivery technologies more favorably than before the health crisis.

Besides cost savings, the growth in autonomous trucking has been driven in part by a shortage of human drivers. In 2018, the American Trucking Associations estimated that 50,000 more truckers were needed to close the gap in the U.S., despite the sidelining of proposed U.S. Transportation Department screenings for sleep apnea.

Big Data – VentureBeat

U.S. will unveil data-sharing platform for autonomous vehicle testing

June 15, 2020   Big Data

(Reuters) — On Monday, U.S. auto safety regulators will unveil a voluntary effort to collect and make available nationwide data on existing autonomous vehicle testing.

U.S. states have a variety of regulations governing self-driving testing and data disclosure, and there is currently no centralized listing of all automated vehicle testing.

California, for example, requires public disclosure of all crashes involving self-driving vehicles, while other states do not.

The National Highway Traffic Safety Administration (NHTSA) is unveiling the Automated Vehicle Transparency and Engagement for Safe Testing (AV TEST) initiative to provide “an online, public-facing platform for sharing automated driving system on-road testing activities.”


With many opinion polls showing deep skepticism about self-driving cars in the U.S., the effort aims to boost public awareness. NHTSA plans “online mapping tools” that will eventually show testing locations and activity data.

NHTSA deputy administrator James Owens said in an interview that providing better transparency “encourages everybody to up their game to help better ensure that the testing is done in a manner fully consistent with safety.”

Fiat Chrysler, Toyota, Uber, Alphabet’s self-driving company Waymo, and Cruise — General Motors’ majority-owned self-driving subsidiary — are expected to take part. Participating states include California, Florida, Michigan, Ohio, Pennsylvania, and Texas, officials said.

NHTSA’s goal is to “pull together really critical stakeholders to deepen the lines of communication and cooperation among all of us,” Owens said, adding the effort was “an opportunity for the states to start sharing information among themselves.”

NHTSA will hold events this week to kick off the initiative, including panels featuring companies involved in autonomous vehicle testing, such as Nuro, Beep, Waymo, Uber, and Toyota.

Critics say NHTSA should mandate federal safety standards for automated driving systems.

The National Transportation Safety Board (NTSB), in its investigation of the March 2018 death of a pedestrian struck by an Uber test vehicle — the first fatality attributed to a self-driving car — said in November that NHTSA should make self-driving vehicle safety assessments mandatory and ensure automated vehicles have appropriate safeguards.

Owens said NHTSA “will not hesitate” to take action if it believes unsafe vehicles are being tested on U.S. roads, but it has not adopted NTSB’s recommendations.

(Reporting by David Shepardson, editing by Peter Cooney.)

Big Data – VentureBeat

Waymo resumes limited autonomous driving testing in Phoenix in first step back to normalcy

May 8, 2020   Big Data

Nearly two months after announcing it would halt operations around the country in response to the coronavirus pandemic, Waymo today said it will begin limited driving tests in the Metro Phoenix area starting on May 11. The Alphabet subsidiary described this as the first part of a “tiered approach” to gradually relaunch operations with its fleet, with commercial service to follow only after the health and safety of riders can be ensured.

That’s all to say that Waymo’s eponymous Waymo One ride-hailing service, which was put on pause in late March, won’t resume pickups and dropoffs just yet. A spokesperson told VentureBeat that Waymo plans to start accepting riders again in the “coming weeks.” To be clear, the phase of redeployment detailed today only involves safety and test drivers, as well as other employees and contractors working out of Waymo’s facilities.

Waymo says that it is following guidance from the U.S. Centers for Disease Control and Prevention, state, and local authorities as its operations resume, redesigning its facilities to respect social distancing guidelines and spacing out work areas by the recommended six feet. The company has also redefined the use of common areas and limited the maximum capacity for various spaces, and it has created trainings for employees around new safety standards and how to work and move around its facilities.

In line with Arizona guidance, Waymo employees will wear face masks in facilities or vehicles, except when a person is driving alone. The company also says it has deep cleaned its buildings and that it will continue to conduct multiple daily cleanings of its vehicles in partnership with AutoNation. Lastly, Waymo says it is working with an occupational healthcare provider to screen all people before they enter its facilities.


In the near future, Waymo expects to begin driving again in other cities, including San Francisco, Detroit, and Los Angeles. “We’re taking a thoughtful and measured approach towards bringing our driving operations back on the road,” the company wrote in a blog post. “Resumption of our driving operations in these locations will similarly be guided by ensuring the safety and health of our team in line with … [federal and state] guidance.”

In addition to Waymo, Uber, GM-backed Cruise, Aurora, Argo AI, and Pony.ai are among the companies that suspended their driverless vehicle programs in the hopes of limiting contact between drivers and riders. In the interim, some, like Pony.ai and Cruise, pivoted to autonomous delivery. Others leaned heavily on simulation to continue development even as their fleets were grounded.

Big Data – VentureBeat

Coronavirus fears halt autonomous vehicle testing for Uber, Cruise, Aurora, Argo AI, Waymo, and others

March 18, 2020   Big Data

Following Waymo’s announcement that it would limit its ride-hailing service in Phoenix, Arizona, and its autonomous car testing on California roads in response to the COVID-19 pandemic, several competitors adopted similar measures today and earlier this week. Uber, GM’s Cruise, Aurora, Argo AI, and Pony.ai are among the companies that have suspended driverless vehicle programs in the hopes of limiting contact between drivers and riders.

“Our goal is to help flatten the curve of community spread,” said Uber Advanced Technologies Group (ATG) CEO Eric Meyhofer in a statement. “Following recent guidance from local and state officials in areas where we operate our self-driving vehicles, we are pausing all test track and on-road testing until further notice.”

Uber halted operations on March 16, and the company told VentureBeat that the ATG team continues to execute on projects from home with offline virtual simulation tools like Autonomous Visualization System and VerCD. Uber had briefly resumed autonomous vehicle testing in San Francisco starting March 10, a little over a month after it received a California Department of Motor Vehicles (DMV) license, and it had previously been operating fleets manually in Dallas, Toronto, and Washington, D.C.

“The safety and well-being of our employees and our community is our top priority. Out of an abundance of caution, we have asked Cruisers across all our locations who can conduct their work remotely to do so until further notice.” –@ArdenMHoffman1, Chief People Officer

(1/2)

— Cruise (@Cruise) March 9, 2020

Cruise’s chief people officer Arden Hoffman said that Cruise has suspended operations and closed all San Francisco facilities for the time being, with a plan to reopen them in three weeks’ time. (The company confirmed that it plans to pay autonomous vehicle operators during the period.) One of the programs affected is a ride-hailing pilot in San Francisco called Cruise Anywhere that allows Cruise employees to use an app to get around mapped areas.


Aurora VP of operations Greg Zanghi told VentureBeat that Aurora’s entire team — including its test drivers — is working from home and will continue to get paid. In lieu of on-the-road tests, the company will use digital systems like its Virtual Test Suite to continue to fuel development and testing efforts.

“We recognize that this is an entirely unprecedented situation with unique challenges and we all need to come together and support one another,” said Zanghi. “While we continue to strive for work excellence, families come first and we are encouraging everyone to do what is needed to take care of their families. Our top priority is keeping our community safe and healthy, while also keeping our teams feeling supported, motivated, and connected.”

As for Argo AI, a spokesperson told VentureBeat that while it hasn’t experienced a “significant impact” due to the coronavirus, it has taken steps to allow work from home, including pausing car testing operations at all of its locations. Argo was conducting testing in Pittsburgh, where it’s based, as well as in Austin, Miami, Palo Alto, Washington, D.C., and Dearborn, Michigan.

“Argo AI places the highest priority on ensuring our employees and contractors have a safe, secure and healthy work environment,” said the spokesperson.

Pony.ai decided to suspend its public PonyPilot service for three weeks starting March 16, along with its autonomous vehicle commuter pilot for the city government of Fremont, California. The company recently launched both programs following a multi-month robo-taxi service in Irvine, California, dubbed BotRide, in partnership with Hyundai (which provided KONA Electric SUVs) and Via (which supplied the passenger booking and assignment logistics).

Tech giant Baidu has also ceased all self-driving activities in California, following Santa Clara county guidelines.

1/5 In the interest of the health and safety of our riders and the entire Waymo community, we’re pausing our Waymo One service with trained drivers in Metro Phoenix for now as we continue to watch COVID-19 developments.

— Waymo (@Waymo) March 17, 2020

Concern over the spread of the novel coronavirus was the chief motivator behind the industry-wide pauses in autonomous vehicle testing. Waymo said it made its decision “in the interest of the health and safety of our riders and the entire Waymo community,” and after at least one incident in which a human safety driver in a Waymo vehicle refused to pick up a passenger because a local case of COVID-19 had been reported. (Waymo continues to pick up passengers as part of its Waymo One program in Phoenix with a small number of completely driverless vehicles, however.)

In related news, Uber and Lyft today said they would stop allowing customers to order shared rides in order to prevent infection. Uber suggested that drivers roll down windows to “improve ventilation” and asked riders to wash their hands before and after entering cars.

In the U.S. at the time of writing, the total number of coronavirus cases and deaths stood at 4,226 and 75, respectively, as reported by the Centers for Disease Control and Prevention.

Big Data – VentureBeat

Nest is testing detecting HVAC problems with AI

January 29, 2020   Big Data

Owners of Nest-branded thermostats will soon see their systems improved with a new feature designed to detect problems in heating and cooling (HVAC) systems. Google today announced that it’s testing algorithms tailored to identify “unusual patterns” related to HVAC systems controlled by Nest, which can then alert users to issues and put them in touch with a maintenance professional.

Based on information like the thermostat’s historical data and current weather, Nest will learn to spot patterns that might indicate something is wrong. (For example, if it’s taking longer than normal to heat a home, there might be a problem with a heating system.) With continuous feedback and thanks to the AI models under the hood, Google says the system will become better at detecting more possible anomalies over time.
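Google hasn’t said exactly how these models work, but the underlying idea is easy to picture: compare today’s behavior against a weather-adjusted historical baseline and flag large deviations. Here is a minimal illustration; the function names, data, and thresholds are hypothetical and not Nest’s implementation.

```python
import statistics

def heating_anomaly(minutes_to_target_history, minutes_to_target_today, z_threshold=3.0):
    """Flag a possible HVAC problem when today's heat-up time is far outside
    the historical distribution for days with similar outdoor temperatures."""
    mean = statistics.mean(minutes_to_target_history)
    stdev = statistics.stdev(minutes_to_target_history)
    if stdev == 0:
        # Degenerate baseline: fall back to a simple ratio check
        return minutes_to_target_today > mean * 1.5
    z = (minutes_to_target_today - mean) / stdev
    return z > z_threshold

# Example: past heat-up times (minutes) on days with similar weather
history = [22, 25, 20, 24, 23, 21, 26, 22]
print(heating_anomaly(history, 55))  # True -> notify the user / Nest Home Report
```

In practice the baseline would be conditioned on weather and learned continuously, but the flag-and-notify flow is the same.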

Here’s how to opt in:

  • Open the Nest app
  • Tap Settings
  • Tap Notifications
  • Tap Nest Home Report
  • Slide the toggle on

Users will receive notifications via emails that’ll outline what the Nest thermostat detected and which system (heating or cooling) might have been the problem, or they can sign up for a daily report — the Nest Home Report — that will similarly spotlight the HVAC alerts. If necessary, they’ll be able to book an HVAC professional through gig marketplace Handy, initially in over 20 metro areas including Atlanta, Boston, Denver, Las Vegas, and San Diego and later in additional regions throughout the testing period.

Maintenance with Handy will include a general inspection of either the heating or cooling system and the Nest thermostat by a trained HVAC professional.  The price of the maintenance visit won’t include any additional work, however; if the technician determines that the system requires additional work, they’ll provide a description of the service and a quote with any further costs, which users will be able to approve or decline on the spot.

Nest customers who had their thermostat installed by a Nest Pro can opt to hire the same Pro again from within the Nest app.

“With this new thermostat feature, you now have more insight into your heating and cooling system,” wrote Nest product manager Jeff Gleeson, who noted that the HVAC alerts aren’t meant to replace the diagnosis of a qualified HVAC professional. “[Our hope is that this will] help you look after your home.”

HVAC anomaly detection is a natural extension of Nest’s predictive capabilities. Every thermostat in the Google division’s product family taps algorithms to learn people’s schedules, the temperatures they’re used to, and when — chiefly by monitoring activity and usage patterns over the first weeks of ownership. Using built-in sensors and phones’ locations, it can shift into energy-saving mode when it realizes nobody is at home.

Big Data – VentureBeat

Baidu details its adversarial toolbox for testing robustness of AI models

January 18, 2020   Big Data

No matter the claimed robustness of AI and machine learning systems in production, none are immune to adversarial attacks, or techniques that attempt to fool algorithms through malicious input. It’s been shown that generating even small perturbations on images can fool the best of classifiers with high probability. And that’s problematic considering the wide proliferation of the “AI as a service” business model, where companies like Amazon, Google, Microsoft, Clarifai, and others have made systems that might be vulnerable to attack available to end users.

Researchers at tech giant Baidu propose a partial solution in a recent paper published on Arxiv.org: Advbox. They describe it as an open source toolbox for generating adversarial examples, and they say it’s able to fool models in frameworks like Facebook’s PyTorch and Caffe2, MXNet, Keras, Google’s TensorFlow, and Baidu’s own PaddlePaddle.

While AdvBox itself isn’t new — the initial release was over a year ago — the paper dives into the technical details.

AdvBox is written in Python, and it implements several common attacks that perform searches for adversarial samples. Each attack method uses a distance measure to quantify the size of the adversarial perturbation, while a companion tool — Perceptron, which supports image classification and object detection models as well as cloud APIs — evaluates the robustness of a model to noise, blurring, brightness adjustments, rotations, and more.
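This excerpt doesn’t show AdvBox’s actual API, but the core idea behind many of the bundled attacks, a perturbation whose size is bounded by a distance measure, can be illustrated with a framework-free FGSM-style sketch against a toy linear classifier. All values below are hypothetical; this is not AdvBox code.

```python
import numpy as np

# Hypothetical linear binary classifier: p(y=1|x) = sigmoid(w.x + b)
rng = np.random.default_rng(0)
w, b = rng.normal(size=16), 0.1
x = rng.normal(size=16)          # clean input
y = 1                            # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: step in the sign of the gradient, bounded by an L-infinity budget eps
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print("clean prediction:", sigmoid(w @ x + b))
print("adversarial prediction:", sigmoid(w @ x_adv + b))
print("L-inf perturbation size:", np.max(np.abs(x_adv - x)))
```

The same bounded-perturbation principle carries over to the image attacks the toolbox ships, only with deep models and gradients supplied by the underlying framework.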

AdvBox ships with tools for testing detection models susceptible to so-called adversarial t-shirts or facial recognition attacks. Plus, it offers access to Baidu’s cloud-hosted deepfakes detection service via an included Python script.

“Small and often imperceptible perturbations to [input] are sufficient to fool the most powerful [AI],” wrote the coauthors. “Compared to previous work, our platform supports black box attacks … as well as more attack scenarios.”

Baidu isn’t the only company publishing resources designed to help data scientists defend against adversarial attacks. Last year, IBM and MIT released a metric for estimating the robustness of machine learning and AI algorithms called Cross Lipschitz Extreme Value for Network Robustness, or CLEVER for short. And in April, IBM announced a developer kit called the Adversarial Robustness Toolbox, which includes code for measuring model vulnerability and suggests methods for protecting against runtime manipulation. Separately, researchers at the University of Tübingen in Germany created Foolbox, a Python library for generating over 20 different attacks against TensorFlow, Keras, and other frameworks.

But much work remains to be done. According to Jamal Atif, a professor at the Université Paris-Dauphine, the most effective defense strategy in the image classification domain — augmenting a group of photos with examples of adversarial images — at best has gotten accuracy back up to only 45%. “This is state of the art,” he said during an address in Paris at the annual France is AI conference hosted by France Digitale. “We just do not have a powerful defense strategy.”

Big Data – VentureBeat

Optimize Your Marketing Strategy With A/B Testing

November 25, 2019   CRM News and Info

Having too many good ideas is a great problem to have; knowing which of them will resonate best with your audience is another matter. Choosing the wrong subject line, content asset, or design can lead to lackluster results and send your hard work down the drain. While most of us are allowed a marketing miss or two, one too many could lead to wasted efforts and missed opportunities for ROI.


If you’re like me, you probably prefer to stick to creative tasks and leave data and analytics to others on your team. The problem with that approach is that it leads to random acts of marketing that might not always correlate with the interests and pain points of your target audience or where they are in the sales funnel. So, in order to get the results you want, you need to launch personalized marketing efforts that engage, nurture, and convert your audience. 

So what can marketers do to hit the nail on the head when it comes to producing effective and engaging marketing efforts? To start, you have to stop looking at the process of gathering and analyzing data as an unbearable and impossible task, and start thinking of it as a source of inspiration. When fully leveraged, good data opens the door for you to produce creative campaigns that generate results. By taking time to test your efforts and gather insights, you’re carving the way toward producing a more targeted and effective digital marketing strategy. 

Today, we’re on a quest to help you eliminate the guesswork by outlining how your marketing team can benefit from practicing A/B testing. Keep reading if you’re ready to start gathering data that will help you optimize engagement and drive sustainable results. 

What Is A/B Testing and Why Should You Implement It? 

In the marketing world, A/B testing allows you to compare two versions of the same marketing asset with one distinct variable to determine which one resonates best with your audience and will garner better results. If you want to reap the full rewards of A/B testing, it is important to make this process a continuous effort. You can continue to test the same asset (also known as the control; this is the version that usually generates the best results) against a new contender every time.

But why is A/B testing so important and why should it be a continuous effort? Repeating this process will help you uncover insights regarding your audience’s preferences and enable you to optimize your efforts for better engagement and conversions. While there might be a great amount of guesswork the first go around, you’ll eventually learn what your audience wants to hear and see — and when. And the more you practice A/B testing, the more your focus will shift from figuring out what works to fine-tuning campaigns and efforts that you already know will be a big hit. 
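One practical note: before declaring a winner, it helps to check that the observed difference is larger than random noise would explain. Below is a minimal two-proportion z-test sketch; the open-rate numbers are hypothetical and this is generic statistics, not a feature of any particular platform.

```python
import math

def two_proportion_z(conversions_a, n_a, conversions_b, n_b):
    """Z-statistic for the difference between two conversion rates."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Subject line A: 180 opens out of 1,000 sends; subject line B: 225 out of 1,000
z = two_proportion_z(180, 1000, 225, 1000)
print(f"z = {z:.2f}")  # |z| > 1.96 -> roughly 95% confidence the lift is real
```

If the sample is too small to clear that bar, keep the test running rather than acting on the early numbers.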

What Should You A/B Test, Why, and How Often? 

Now that you get the gist of what A/B testing is, you’re probably wondering how to implement it as a holistic part of your marketing plan. You can A/B test pretty much anything, but we recommend that you focus on the variables that are likely to pack the most punch to begin with and then move on to more minute details. 

For example, if you’re trying to generate better results from your email marketing, you should consider the buyer journey as you work on A/B testing different elements. Your first goal should be to get your email delivered and opened, so your subject line is a good place to start. Once you’ve gathered enough data to understand what kind of subject lines resonate with your audience, you can move on to testing copy and CTAs. The end goal is to optimize your efforts so that they motivate your customers to keep progressing through the entire customer journey. 

As to how often you should A/B test your efforts, that’s really up to you, but we recommend not to let your efforts remain stagnant for too long. At Act-On, we meet monthly to review the results of our campaigns and optimize them accordingly. We’re very fortunate that the Act-On platform makes it easy to A/B test elements in email, landing pages, and forms so that we can ensure that every single effort we launch is tailored to pique the interest of our audience, encourage them to engage, and convince them to convert. 

What Can You Do With the Insights You Collect Through A/B Testing? 

As we’ve mentioned a few times, the greatest benefit of A/B testing is that it allows you to optimize your efforts. A mistake that many marketers make, however, is not looking at the bigger picture when analyzing their results. Whether you’re testing a subject line, preview text, design, or CTA, you should use your findings to inform your overall marketing strategy and business efforts.

Think about where else you can apply your findings aside from the variable at hand. These are a few areas where you can leverage the results of your A/B testing for maximum impact: 

  • Content Strategy: Using the results you gather from A/B testing CTAs and topics is a great way to determine which direction to take when it comes to developing your content strategy. If you notice that certain content pieces or topics are generating a lot of buzz across the board, that’s a good indicator that you should produce similar content in the future. 
  • Sales Funnel Optimization: Have you noticed that, despite the fact that you’ve invested a great amount of time A/B testing and optimizing your efforts, your leads still seem to disappear at certain points in the customer journey? If so, you should take a step back to determine what kind of changes you can make (beyond testing variables) to improve the customer journey and keep your target audience moving through the sales funnel. 
  • Pricing Strategy: Comparing your pricing with your competitors is a good place to start if you want to determine how much to charge customers for products or services. A/B testing prices with your own customers, however, can provide more thorough insight into how much your customers are willing to pay for what you have to offer. This practice can help you figure out a pricing strategy moving forward and also provide information about things you can do to increase the value of your offerings in the eyes of your consumers.

You don’t have to be sneaky about testing pricing. Sustainable clothing company Everlane, for example, has a pretty interesting approach to A/B testing pricing on their website. The company’s “Choose What You Pay” section gives customers the option to pick one of three listed prices on overstock items. This enables the company to get rid of surplus stock by offering discounted prices while collecting important data that tells them how much customers are willing to pay for their products.

  • Paid Search: Whether you’re a B2B or B2C organization, chances are you rely on some sort of paid ads to capture the attention of your target audience and generate new opportunities. A/B testing can help you determine the best social media platform to use to promote your ads, the most effective placement, and even which keywords to use. This will not only lead to better conversions but also help you effectively manage where you allocate your budget. 
  • Email Marketing: If you’re not already A/B testing your email marketing efforts, then you should start doing so as soon as possible for multiple reasons. To start, you can’t see results from your email efforts if your messages are not getting delivered, opened, and read. 

In addition to following best practices for deliverability, you need to ensure that everything from your subject line to your copy, design, and CTAs all resonate with your audience and motivate them to convert. Testing each email element individually can help you uncover many insights about your target audience’s preferences, which you can use to inform and optimize your email marketing strategy over time. 

The Right Platform Can Make A/B Testing Your Efforts Second Nature and Empower You to Leverage Your Findings

Let’s be honest, this is probably not the first time you’ve heard about the benefits of A/B testing, but you’re probably not making it a consistent effort because of the amount of time and resources it takes to test variables, gather data, and analyze your results. Implementing this practice and leveraging your findings doesn’t have to be a tedious task, however — especially if you’re using a comprehensive marketing automation tool such as Act-On. 

Act-On not only makes the A/B testing process a breeze — allowing you to test practically any variable on everything from emails to landing pages to your website —  but it also allows you to put the insights you gather into action easily. Our platform’s Data Studio enables you to consolidate your data and create reports so you can easily analyze your results. To top it all off, Act-On provides the tools you need to easily build email nurture campaigns and landing pages, score leads, segment your audience, and more. 

If you’d like to learn more about how Act-On’s powerful marketing automation platform can help you enhance every single aspect of your marketing strategy, we invite you to schedule a demo with one of our digital marketing experts. 

Act-On Blog

REST API Testing Strategy: What Exactly Should You Test?

October 10, 2019   Sisense

The API layer of any application is one of the most crucial software components. It is the channel which connects client to server (or one microservice to another), drives business processes, and provides the services which give value to users. 

A customer-facing public API that is exposed to end-users becomes a product in itself. If it breaks, it puts at risk not just a single application but an entire chain of business processes built around it.

Mike Cohn’s famous Test Pyramid places API tests at the service level (integration), which suggests that around 20% or more of all of our tests should focus on APIs (the exact percentage is less important and varies based on our needs).

Once we have a solid foundation of unit tests which cover individual functions, API tests provide higher reliability covering an interface closer to the user, yet without the brittleness of UI tests.

API tests are fast, give high ROI, and simplify the validation of business logic, security, compliance, and other aspects of the application. In cases where the API is a public one, providing end-users programmatic access to our application or services, API tests effectively become end-to-end tests and should cover a complete user story. 

So the importance of API testing is obvious. Several methods and resources help with HOW to test APIs — manual testing, automated testing, test environments, tools, libraries, and frameworks. However, regardless of what you will use — Postman, supertest, pytest, JMeter, mocha, Jasmine, RestAssured, or any other tools of the trade — before coming up with any test method you need to determine what to test… 

API test strategy 

The test strategy is the high-level description of the test requirements from which a detailed test plan can later be derived, specifying individual test scenarios and test cases. Our first concern is functional testing — ensuring that the API functions correctly.

The main objectives in functional testing of the API are: 

  • to ensure that the implementation is working correctly as expected — no bugs!
  • to ensure that the implementation is working as specified according to the requirements specification (which later on becomes our API documentation).
  • to prevent regressions between code merges and releases.

API as a contract — first, check the spec!

An API is essentially a contract between the client and the server or between two applications. Before any implementation test can begin, it is important to make sure that the contract is correct. That can be done first by inspecting the spec (or the service contract itself, for example a Swagger interface or OpenAPI reference) and making sure that endpoints are correctly named, that resources and their types correctly reflect the object model, that there is no missing or duplicate functionality, and that relationships between resources are reflected in the API correctly.
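Part of this contract review can be automated. As a rough sketch, assuming an OpenAPI/Swagger document saved as openapi.json and a hypothetical list of expected endpoints, a short script can diff the declared paths and methods against what the design calls for:

```python
import json

HTTP_METHODS = {"get", "post", "put", "patch", "delete"}

# Hypothetical expectations taken from the design/spec review
EXPECTED = {
    ("/users", "get"),
    ("/users/{id}", "get"),
    ("/users/{id}/configurations", "get"),
    ("/users/{id}/configurations", "post"),
}

with open("openapi.json") as f:          # hypothetical spec file
    spec = json.load(f)

declared = {
    (path, method)
    for path, methods in spec.get("paths", {}).items()
    for method in methods
    if method in HTTP_METHODS
}

print("missing endpoints:", EXPECTED - declared or "none")
print("undocumented/unexpected endpoints:", declared - EXPECTED or "none")
```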

The guidelines above are applicable to any API, but for simplicity, in this post, we assume the most widely used Web API architecture — REST over HTTP. If your API is designed as a truly RESTful API, it is important to check that the REST contract is a valid one, including all HTTP REST semantics, conventions, and principles.

If this is a customer-facing public API, this might be your last chance to ensure that all contract requirements are met, because once the API is published and in use, any changes you make might break customers’ code. 

(Sure, you can publish a new version of the API someday (e.g., /api/v2/), but even then backward compatibility might still be a requirement).

So, what aspects of the API should we test?

Now that we have validated the API contract, we are ready to think of what to test. Whether you’re thinking of test automation or manual testing, our functional test cases have the same test actions, are part of wider test scenario categories, and belong to three kinds of test flows.

API test actions 

Each test is comprised of test actions. These are the individual actions a test needs to take per API test flow. For each API request, the test would need to take the following actions: 

1. Verify correct HTTP status code. For example, creating a resource should return 201 CREATED and unpermitted requests should return 403 FORBIDDEN, etc.

2. Verify response payload. Check valid JSON body and correct field names, types, and values — including in error responses.

3. Verify response headers. HTTP server headers have implications on both security and performance.

4. Verify correct application state. This is optional and applies mainly to manual testing, or when a UI or another interface can be easily inspected.  

5. Verify basic performance sanity. If an operation was completed successfully but took an unreasonable amount of time, the test fails.
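Put together, a single-request test covering actions 1–3 and 5 might look like the following pytest-style sketch using the requests library; the base URL, endpoint, and expected fields are hypothetical.

```python
import requests

BASE_URL = "https://api.example.com"   # hypothetical service under test

def test_create_user_configuration():
    payload = {"name": "night-mode", "enabled": True}
    resp = requests.post(f"{BASE_URL}/users/42/configurations",
                         json=payload, timeout=5)

    # 1. Correct HTTP status code
    assert resp.status_code == 201

    # 2. Response payload: valid JSON, expected fields and types
    body = resp.json()
    assert isinstance(body["id"], str)
    assert body["name"] == "night-mode"
    assert body["enabled"] is True

    # 3. Response headers
    assert resp.headers["Content-Type"].startswith("application/json")
    assert "X-Powered-By" not in resp.headers   # no information leakage

    # 5. Basic performance sanity
    assert resp.elapsed.total_seconds() < 1.0
```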

Test scenario categories

Our test cases fall into the following general test scenario groups:

  • Basic positive tests (happy paths)
  • Extended positive testing with optional parameters
  • Negative testing with valid input
  • Negative testing with invalid input
  • Destructive testing
  • Security, authorization, and permission tests (which are out of the scope of this post)

Happy path tests check basic functionality and the acceptance criteria of the API. We later extend positive tests to include optional parameters and extra functionality. The next group of tests is negative testing where we expect the application to gracefully handle problem scenarios with both valid user input (for example, trying to add an existing username) and invalid user input (trying to add a username which is null). Destructive testing is a deeper form of negative testing where we intentionally attempt to break the API to check its robustness (for example, sending a huge payload body in an attempt to overflow the system).   

Test flows

Let’s distinguish between three kinds of test flows which comprise our test plan:

  1. Testing requests in isolation – Executing a single API request and checking the response accordingly. Such basic tests are the minimal building blocks we should start with, and there’s no reason to continue testing if these tests fail.
  2. Multi-step workflow with several requests – Testing a series of requests which are common user actions, since some requests can rely on other ones. For example, we execute a POST request that creates a resource and returns an auto-generated identifier in its response. We then use this identifier to check if this resource is present in the list of elements received by a GET request. Then we use a PATCH endpoint to update new data, and we again invoke a GET request to validate the new data. Finally, we DELETE that resource and use GET again to verify it no longer exists.
  3. Combined API and web UI tests – This is mostly relevant to manual testing, where we want to ensure data integrity and consistency between the UI and API.

We execute requests via the API and verify the actions through the web app UI and vice versa. The purpose of these integrity test flows is to ensure that although the resources are affected via different mechanisms the system still maintains expected integrity and consistent flow.    
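Here is a sketch of the multi-step workflow described in flow (2), again with pytest and requests; the endpoints and fields are illustrative rather than a specific product’s API.

```python
import requests

BASE_URL = "https://api.example.com"   # hypothetical service under test

def test_configuration_lifecycle():
    # Create a resource and capture the auto-generated identifier
    created = requests.post(f"{BASE_URL}/users/42/configurations",
                            json={"name": "eco-mode"}, timeout=5)
    assert created.status_code == 201
    config_id = created.json()["id"]

    # It should appear in the collection returned by GET
    listing = requests.get(f"{BASE_URL}/users/42/configurations", timeout=5).json()
    assert any(c["id"] == config_id for c in listing)

    # Update it, then read the collection back to confirm the new data
    patched = requests.patch(f"{BASE_URL}/users/42/configurations/{config_id}",
                             json={"name": "eco-mode-v2"}, timeout=5)
    assert patched.status_code == 200
    refreshed = requests.get(f"{BASE_URL}/users/42/configurations", timeout=5).json()
    assert any(c["id"] == config_id and c["name"] == "eco-mode-v2" for c in refreshed)

    # Delete it and verify it no longer appears
    deleted = requests.delete(f"{BASE_URL}/users/42/configurations/{config_id}", timeout=5)
    assert deleted.status_code in (200, 202, 204)
    final = requests.get(f"{BASE_URL}/users/42/configurations", timeout=5).json()
    assert all(c["id"] != config_id for c in final)
```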

An API example and a test matrix 

We can now express everything as a matrix that can be used to write a detailed test plan (for test automation or manual tests). 

Let’s assume a subset of our API is the /users endpoint, which includes the following API calls: 

GET /users – List all users
GET /users?name={username} – Get user by username
GET /users/{id} – Get user by ID
GET /users/{id}/configurations – Get all configurations for user
POST /users/{id}/configurations – Create a new configuration for user
DELETE /users/{id}/configurations/{id} – Delete configuration for user
PATCH /users/{id}/configurations/{id} – Update configuration for user

Where {id} is a UUID, and all GET endpoints allow optional query parameters filter, sort, skip and limit for filtering, sorting, and pagination. 

1. Basic positive tests (happy paths)

Test action: Execute API call with valid required parameters.

  • Validate status code:
    1. All requests should return a 2XX HTTP status code.
    2. Returned status code is according to spec:
       – 200 OK for GET requests
       – 201 for POST or PUT requests creating a new resource
       – 200, 202, or 204 for a DELETE operation, and so on
  • Validate payload:
    1. Response is a well-formed JSON object.
    2. Response structure is according to the data model (schema validation: field names and field types are as expected, including nested objects; field values are as expected; non-nullable fields are not null, etc.).
  • Validate state:
    1. For GET requests, verify there is NO STATE CHANGE in the system (idempotence).
    2. For POST, DELETE, PATCH, and PUT operations, ensure the action has been performed correctly in the system by:
       – performing the appropriate GET request and inspecting the response
       – refreshing the UI in the web application and verifying the new state (only applicable to manual testing)
  • Validate headers: Verify that HTTP headers are as expected, including content-type, connection, cache-control, expires, access-control-allow-origin, keep-alive, HSTS, and other standard header fields – according to spec. Verify that information is NOT leaked via headers (e.g., the X-Powered-By header is not sent to the user).
  • Performance sanity: Response is received in a timely manner (within reasonable expected time) – as defined in the test plan.

2. Positive tests with optional parameters

Test action: Execute API call with valid required parameters AND valid optional parameters. Run the same tests as in #1, this time including the endpoint’s optional parameters (e.g., filter, sort, limit, skip, etc.).

  • Validate status code: As in #1.
  • Validate payload: Verify response structure and content as in #1. In addition, check the following parameters:
    – filter: ensure the response is filtered on the specified value
    – sort: specify the field on which to sort, test ascending and descending options; ensure the response is sorted according to the selected field and sort direction
    – skip: ensure the specified number of results from the start of the dataset is skipped
    – limit: ensure dataset size is bounded by the specified limit
    – limit + skip: test pagination
    Check combinations of all optional fields (filter + sort + limit + skip) and verify the expected response.
  • Validate state: As in #1.
  • Validate headers: As in #1.
  • Performance sanity: As in #1.

3. Negative testing – valid input

Test action: Execute API calls with valid input that attempts illegal operations, i.e.:
  – attempting to create a resource with a name that already exists (e.g., a user configuration with the same name)
  – attempting to delete a resource that doesn’t exist (e.g., a user configuration with no such ID)
  – attempting to update a resource with illegal valid data (e.g., rename a configuration to an existing name)
  – attempting an illegal operation (e.g., delete a user configuration without permission)
  And so forth.

  • Validate status code:
    1. Verify that an erroneous HTTP status code is sent (NOT 2XX).
    2. Verify that the HTTP status code is in accordance with the error case as defined in the spec.
  • Validate payload:
    1. Verify that an error response is received.
    2. Verify that the error format is according to spec, e.g., the error is a valid JSON object or a plain string (as defined in the spec).
    3. Verify that there is a clear, descriptive error message/description field.
    4. Verify the error description is correct for this error case and in accordance with the spec.
  • Validate headers: As in #1.
  • Performance sanity: Ensure the error is received in a timely manner (within reasonable expected time).

4. Negative testing – invalid input

Test action: Execute API calls with invalid input, e.g.:
  – missing or invalid authorization token
  – missing required parameters
  – invalid value for endpoint parameters, e.g., invalid UUID in path or query parameters
  – payload with invalid model (violates schema)
  – payload with incomplete model (missing fields or required nested entities)
  – invalid values in nested entity fields
  – invalid values in HTTP headers
  – unsupported methods for endpoints
  And so on.

  • Validate status code: As in #1.
  • Validate payload: As in #1.
  • Validate headers: As in #1.
  • Performance sanity: As in #1.

5. Destructive testing

Test action: Intentionally attempt to fail the API to check its robustness:
  – malformed content in request
  – wrong content-type in payload
  – content with wrong structure
  – overflow parameter values, e.g.: attempt to create a user configuration with a title longer than 200 characters; attempt to GET a user with an invalid UUID that is 1,000 characters long; overflow payload – huge JSON in request body
  – boundary value testing
  – empty payloads
  – empty sub-objects in payload
  – illegal characters in parameters or payload
  – using incorrect HTTP headers (e.g., Content-Type)
  – small concurrency tests – concurrent API calls that write to the same resources (DELETE + PATCH, etc.)
  – other exploratory testing

  • Validate status code: As in #3. API should fail gracefully.
  • Validate payload: As in #3. API should fail gracefully.
  • Validate headers: As in #3. API should fail gracefully.
  • Performance sanity: As in #3. API should fail gracefully.

Test cases derived from the table above should cover different test flows according to our needs, resources, and priorities. 

Following the test matrix above should generate enough test cases to keep us busy for a while and provide good functional coverage of the API. Passing all functional tests implies a good level of maturity for an API, but it is not enough to ensure high quality and reliability of the API. 

In the next post in this series we will cover the following non-functional test approaches which are essential for API quality: 

Security and Authorization 

  • Check that the API is designed according to correct security principles: deny-by-default, fail securely, least privilege principle, reject all illegal inputs, etc.
    • Positive: ensure API responds to correct authorization via all agreed auth methods – Bearer token, cookies, digest, etc. – as defined in spec
    • Negative: ensure API refuses all unauthorized calls
  • Role Permissions: ensure that specific endpoints are exposed to user based on role. API should refuse calls to endpoints which are not permitted for user’s role
  • Protocol: check HTTP/HTTPS according to spec
  • Data leaks: ensure that internal data representations that are desired to stay internal do not leak outside to the public API in the response payloads
  • Rate limiting, throttling, and access control policies 

Performance

  • Check API response time, latency, TTFB/TTLB in various scenarios (in isolation and under load)

Load Tests (positive), Stress Tests (negative)

  • Find capacity limit points and ensure the system performs as expected under load, and fails gracefully under stress 

Usability Tests 

  • For public APIs: a manual “Product”-level test going through the entire developer journey from documentation, login, authentication, code examples, etc. to ensure the usability of the API for users without prior knowledge of our system.

Thanks for reading. To be continued!

Tags: developer | how to’s

Blog – Sisense

Putting CX at the Center of Testing Strategies

August 28, 2019   CRM News and Info

From e-commerce to banking applications to healthcare systems — and everything in between — if it’s digital, users expect it to work at every interaction, and on every possible platform and operating system.

However, despite the need to provide a digital experience that delights, Gartner research suggests that only 18 percent of companies are delivering their desired customer experience.

A big part of this gap between expectation and reality is that digital businesses depend on the quality of their software and applications, which frequently do not perform as they should. In an age when digital transformation is so dependent upon better quality software, testing has never been more critical. However, for the last decade, testing has focused on verification — that is, does it work? — rather than validation, meaning, does it do what I expect and want?

As companies progress in their digital transformation journeys, it’s critical that testing focus on answering the latter question. In other words, software testing must pivot from simply checking that an application meets technical requirements to ensuring that it delivers better user experiences and business outcomes.

Verification vs. Validation

Testing needs to shift from a verification-driven activity to a continuous quality process. The goal is to understand how customer experiences and business outcomes are affected by the technical behavior of the application. More than this, though, it’s about identifying opportunities for improvements and predicting the business impact of those improvements.

Verification testing merely checks that the code complies with a specification provided by the business. These specifications are assumed to be perfect and completely replicate how users interact with and use the software.

However, there’s simply no way a specification writer could know how users will react to every part of the software or capture everything that could impact customer experience. Even if there were, it would make the software development painfully slow. By adopting this approach, the assumption is that validation also has been done as a result. However, this is a mirage rather than a reality, and has resulted in the customer experience being ignored from a software testing perspective.

Companies must abandon the outdated approach of testing only whether the software works, and instead embrace a strategy that evaluates the user perspective and delivers insights to optimize their experiences. If you care about your user experience and if you care about business outcomes, you need to be testing the product from the outside in, the way a user does. Only then can you truly evaluate the user experience.

A user-centric approach to testing ensures that user interface errors, bugs and performance issues are identified and addressed long before the application is live and has the chance to have a negative impact on the customer experience and, potentially, brand perception. Fast, reliable websites and applications increase engagement, deliver revenue, and drive positive business outcomes. Ensuring that these objectives are met should be an essential part of modern testing strategies.

For example, a banking app may meet all the specification criteria, but if it requires customers to add in their account details each time they want to access their account, they will lose patience quickly, stop using the app, and ultimately move to a competitor. This is exactly why businesses need to rethink how they evaluate software and applications and re-orient their focus to meet their customers’ expectations and needs.

If businesses want to close the customer experience gap, then they need to rethink how they evaluate software and applications. Validation testing should be a foundational element of testing strategies. However, organizations need to start testing the user experience and modernizing their approach so that they can keep up with the pace of DevOps and continuous delivery. This is an essential driver behind digital transformation.

Historically the only organizations carrying out validation testing have been teams with experienced manual exploratory testing capabilities. Exploratory testing evaluates functionality, performance and usability, and it takes into account the entire universe of tests. However, it’s not transparent, qualitative or replicable, and it’s difficult to include within a continuous development process. Manual exploratory testing is expensive to scale, as it’s time-consuming and the number of skilled testers is limited.

Customer-Driven Testing 101

Customer-driven testing is a new approach that automates exploratory testing for scalability and speed. Fundamentally, customer-driven testing focuses on the user experience rather than the specification. It also helps accelerate traditional specification-driven testing. Artificial intelligence (AI) and machine learning (ML) combined with model-based testing have unlocked the ability to carry out customer-driven testing.

The intelligent automation of software testing enables businesses to test and monitor the end-to-end digital user experience continuously; it analyzes apps and real data to auto-generate and execute user journeys. It then creates a model of the system and user journeys, and automatically generates test cases that provide robust coverage of the user experience — as well as of system performance and functionality.

Through automated feedback loops, you can zoom in on problems quickly and address them. Once that is in place, the intelligent automation can go even further — to where it builds the model itself by watching and understanding the system. It hunts for bugs looking at the app, the testing, and development to understand the risk.

It assesses production to clarify what matters to the business. This intelligence around risk factors and business impact directs the testing to focus in the right places. Unlike the mirage of testing to a specification, the actual customer journey drives the testing.

AI and ML technologies recommend the tests to execute, learning continuously and performing intelligent monitoring that can predict business impacts and enable development teams to fix issues before they occur. These cutting-edge technologies are core components of customer-driven testing, but another essential element is needed: human intelligence.

The Human Factor

Customer-driven testing doesn’t mean the death of the human tester. Machines are great at automating processes and correlating data but are not able to replicate the creative part of testing. This involves interpreting the data into actual human behavior and developing hypotheses about where problems are going to be.

The tester needs to provide hints and direction, as machines can’t replicate their experiences and intuition. Human creativity is essential to guide the customer-driven testing process.

Automated analytics and test products provide vast volumes of data about how a user behaved at the human-to-app interface, but it requires a human to understand why the person took that action. The human will set the thresholds for errors and will pull the levers and guide the algorithms, for example. Customer-driven testing is possible only with human testers augmented by state-of-the-art technology.

CX and the Path Forward

Digitization is rapidly changing the way companies and customers interact with each other. Understanding and optimizing the customer experience and ensuring apps deliver on business goals are now mission-critical for digital businesses. Practices that merely validate that software works must be retired, or organizations run the risk of lagging behind their competitors.

A new approach to testing is essential. The combination of AI-fueled testing coupled with human testers directing the automation makes customer-driven testing possible. If businesses want to close the customer experience gap, then they have to pivot and look at the performance of their digital products through the eyes of the customer. If software truly runs the world, then you need to make sure that it’s delighting your customers rather than merely working.


Antony Edwards is COO of Eggplant.

CRM Buyer

Avoid a Black Friday, Cyber Monday Disaster With Intelligent Testing

August 24, 2019   CRM News and Info

Many online businesses rely on Black Friday and Cyber Monday to drive their profit margins. During this four-day period, retailers will see traffic on their site skyrocket.

How can retailers make sure their sites are robust and won’t fail during this critical period? The answer lies in the application of intelligent testing.

Black Friday traditionally has been the day when retailers finally break even for the year. “Black” in this case refers to accounts finally going into the black. The rise of online commerce has driven Black Friday to new heights. Now the sales phenomenon lasts over the whole weekend and into Cyber Monday.

Over the five days from Thanksgiving to Cyber Monday 2018, 165 million shoppers spent more than US$300 each, on average.

Most online retailers will see a massive surge in traffic over the Black Friday weekend. In fact they will see a double whammy. Not only do more people visit — they visit repeatedly in their search for the best deals. As a result, retailers’ backend services are placed under enormous strain.

A failure during this period would be devastating, bringing bad headlines, lost revenue, and probably the loss of valuable future business. So how do you avoid these pitfalls? The answer is to ensure your site is completely bombproof and can handle the surge in load without a problem.

Stress Testing

Stress testing refers to the process of adding load to your website until it fails, or until the performance drops below an acceptable level.

Typically, there are two types of stress testing. In the first, you check that your site can handle the expected peak traffic load. In the second, you steadily increase the load to try to push your site to failure, which is important because you need to check that it fails gracefully. Traditionally, this sort of testing has been done in a very static manner, but as we will see, that isn’t very realistic.

API-Based Stress Testing

The earliest form of stress testing involved creating a script to repeatedly call your API. The API, or application programming interface, is how a user’s client (browser or app) connects with your backend server. You can simulate users by calling the API directly with command-line tools like cURL, or with specialized tools like SoapUI or Artillery.

The idea is to place so much load on your back end that it fails. This approach has the advantage of simplicity, although it can be challenging to write the script. Each session will need its own API key, so you will need a script with enough smarts to handle all the keys and sessions.
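For illustration, a bare-bones version of this approach might look like the Python sketch below. The endpoint URLs, the session-creation call, and the user counts are hypothetical placeholders rather than any real retailer’s API.

# Minimal API load-test sketch; endpoints and payloads are hypothetical.
import concurrent.futures
import requests

BASE_URL = "https://shop.example.com/api"   # placeholder endpoint
SIMULATED_USERS = 50
REQUESTS_PER_USER = 100

def simulate_user(user_id: int) -> int:
    """One simulated user: obtain a per-session API key, then hammer an endpoint."""
    session = requests.Session()
    key = session.post(f"{BASE_URL}/session").json().get("api_key", "")
    errors = 0
    for _ in range(REQUESTS_PER_USER):
        resp = session.get(f"{BASE_URL}/products",
                           headers={"Authorization": f"Bearer {key}"})
        if resp.status_code >= 500:
            errors += 1
    return errors

if __name__ == "__main__":
    with concurrent.futures.ThreadPoolExecutor(max_workers=SIMULATED_USERS) as pool:
        error_counts = list(pool.map(simulate_user, range(SIMULATED_USERS)))
    print(f"Total server errors: {sum(error_counts)}")

Note that every simulated user here originates from the same machine and follows the same rigid sequence, which is exactly the weakness discussed next.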

However, there are three big drawbacks to this approach:

  1. Modern Web applications rely on dozens of interlinked APIs. This approach can’t test all these interactions properly.
  2. All sessions are coming from the same physical (and logical) source. This means that your load balancers will not be doing their job properly.
  3. Real users don’t interact in a predictable manner. Modeling this randomness is extremely hard in a test script.

API testing is still useful, but typically only for verifying the behavior of the APIs.

The Importance of Realism

Once upon a time, a website was a simple beast. It typically ran on a LAMP stack: a Linux server, the Apache web server, a MySQL database, and PHP application code. The services all ran on a single server, possibly replicated to handle failures. The problem is that this model doesn’t scale. If you get a flash crowd, Apache is quickly overwhelmed, and users will see an error page.

Nowadays, sites are far more complex. Typically, they run from multiple locations (e.g., East Coast and West Coast), with sessions shared between sites by a load balancer. The load balancer uses heuristics, such as the source IP address, to spread the load evenly across your sites.

Many sites are now containerized. Rather than a single server, the application is built up from a set of containers, each providing one of the services. These containers usually can scale out in response to increased demand. If all your test flows come from the same location, the load balancer will struggle to work properly.

Session-Based Testing

Tools like LoadNinja and WebLOAD are designed to provide more intelligent testing based on complete sessions. When users access a website, they create a user session. Modern websites are designed so that these sessions are agnostic to the actual connection; for example, a user who moves from a WiFi hotspot to cellular data won’t experience a dropped session. The geekier among you will know that “session” is layer 5 of the OSI model. Testing at this layer is far better than API testing, since it ensures all APIs are called in the correct order.

Generally, these tools need you to define a handful of standard user interactions or user journeys — for instance, a login flow, a product search, and a purchase. Often the tool records these user journeys. In other cases, you may have to define the test manually, much like you do when using Selenium for UI testing.
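For instance, a manually scripted login journey using Selenium’s Python bindings might look like the sketch below; the URL, element IDs, and credentials are invented for the example.

# Hypothetical login journey scripted with Selenium's Python bindings.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://shop.example.com/login")          # placeholder URL
    driver.find_element(By.ID, "email").send_keys("test.user@example.com")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "login-button").click()
    # Wait until the account page confirms the journey completed.
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "account-summary"))
    )
finally:
    driver.quit()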

Having defined the user journeys, you then can use them to run stress tests. These tests are definitely better than the API testing approach. They are more realistic, especially if you define your scenarios well. However, they still have the major drawback that they are running from a single location. They also suffer from the issues that impact all script-based testing — namely, selector changes.

The Importance of Selectors

Ever since Selenium showed the way, script-based testing tools have used JavaScript selectors to identify UI elements in the system under test. Elements include buttons, images, fields in forms, and menu entries. For load testing tools, these elements are used to create simple scenarios that test the system in a predictable fashion. For instance, find and click the login button, enter valid login details, then submit.

The problem is that JavaScript selectors are not robust. Each time you change your CSS or UI layout, the selectors are liable to change. Even rendering the UI at a different resolution can trigger changes, as can using a different browser. This means you need to update your scripts constantly. Tools like WebLOAD attempt to help by ignoring most of the elements on the page, but you will still have issues if the layout changes.
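To make that fragility concrete, here is a small hedged example (Selenium Python bindings, with an invented page structure) contrasting a positional selector that breaks on any layout change with a slightly sturdier alternative.

# Both locators target the same login button on a hypothetical page.
from selenium.webdriver.common.by import By

# Brittle: tied to the exact DOM position, so a new banner or a reordered
# column silently breaks it.
brittle_locator = (By.CSS_SELECTOR, "div.main > div:nth-child(3) > form > button")

# Sturdier: keyed to a stable id, though it still breaks if the id is renamed.
sturdier_locator = (By.ID, "login-button")

# Either way, a human has to notice the failure and update the script,
# which is the maintenance burden intelligent selectors aim to remove.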

Intelligent Testing

Recent advances in artificial intelligence (AI) have revolutionized testing. Tools such as Mabl, SmartBear, and Functionize have begun applying machine learning and other techniques to create intelligent testing tools.

The best of these tools employ intelligent test agents to replicate the behavior of skilled manual testers, for instance by providing virtually maintenance-free testing and creating working tests directly from English test plans.

Typically, these tools use intelligence to identify the correct selectors, allowing them to be robust to most UI changes. It is even possible to create tests by analyzing real user interactions to spot new user flows. Rather than just test simple user journeys, intelligent test agents can create extremely rich and realistic user journeys that take into account how a real user interacts with your website.

Intelligent Selectors

AI allows you to build complex selectors for elements on a Web page. These selectors combine many attributes — such as the type of element, where it is in relation to other elements, what the element is called, CSS selectors, even complex XPaths.

Each time a test is run, the intelligent test agent learns more about every element on the page. This means that its tests are robust to CSS and layout changes — for instance, if the Buy Now button moves to the top of the page and is colored green to make it more prominent.
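One way to picture such a selector is as a weighted match across several element properties. The Python sketch below uses made-up weights and attribute names; real intelligent test agents learn and refine these signals automatically rather than using a hand-written table.

# Illustrative multi-attribute element matching; weights and fields are invented.
FINGERPRINT = {            # what the agent remembers about the Buy Now button
    "tag": "button",
    "text": "Buy Now",
    "css_id": "buy-now",
    "near_text": "Add to basket",
}

WEIGHTS = {"tag": 1, "text": 3, "css_id": 2, "near_text": 2}

def score(candidate: dict) -> int:
    """Score a candidate page element against the stored fingerprint."""
    return sum(WEIGHTS[k] for k, v in FINGERPRINT.items() if candidate.get(k) == v)

def find_best(candidates):
    """Pick the element that best matches, even if some attributes have changed."""
    return max(candidates, key=score)

if __name__ == "__main__":
    page_elements = [
        {"tag": "button", "text": "Buy Now", "css_id": "cta-primary",
         "near_text": "Add to basket"},
        {"tag": "a", "text": "Learn more", "css_id": "learn", "near_text": ""},
    ]
    print(find_best(page_elements))  # still finds the button after its id changed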

Complex User Journeys

Test scripts often use simplified user journeys. This is because each script takes days to create and days more to debug. Intelligent test tools support the creation of richer user journeys.

They typically fall into two types: intelligent test recorders and natural language processing (NLP) systems. The first records users as they interact with the website, using AI to cope with things like unnecessary clicks or clicks that miss the center of an element. The second uses NLP to take plain-English test plans and treat them as a set of instructions for the test system.
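As a toy illustration of the NLP route, the sketch below maps a few plain-English steps to structured UI actions with simple pattern matching. Every pattern and step here is invented; real products use far richer language understanding.

# Toy translation of plain-English test steps into structured UI actions.
import re

PATTERNS = [
    (re.compile(r'go to "(?P<url>[^"]+)"', re.I), "navigate"),
    (re.compile(r'type "(?P<text>[^"]+)" into "(?P<target>[^"]+)"', re.I), "type"),
    (re.compile(r'click the "(?P<target>[^"]+)" button', re.I), "click"),
]

def parse_step(step: str) -> dict:
    """Turn one English sentence into an action dictionary, if a pattern matches."""
    for pattern, action in PATTERNS:
        match = pattern.search(step)
        if match:
            return {"action": action, **match.groupdict()}
    return {"action": "unknown", "step": step}

plan = [
    'Go to "https://shop.example.com"',
    'Type "winter boots" into "search box"',
    'Click the "Search" button',
]

for step in plan:
    print(parse_step(step))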

Cloud-Based Testing

AI requires significant computing resources, and thus most intelligent test tools run as cloud-based services. Each test is typically run from a single virtual server. These virtual servers may exist in multiple geographic locations; AWS, for instance, offers seven locations in the U.S. and a further 15 worldwide. This means each test looks like a unique user to your load balancer.

Intelligent Stress Testing

Intelligent test agents combine realistic user journeys with testing from multiple locations, giving you intelligent stress testing. Each cloud location starts a series of tests, ramping up the load steadily. As each test completes, a new test is started. This takes into account the different duration of each test, allowing for network delay, server delay, and so on.
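A highly simplified picture of that ramp-up logic is sketched below in Python. The locations, step sizes, and run_journey placeholder are invented; a real service would execute full scripted journeys from genuinely separate cloud regions.

# Toy ramp-up scheduler: each "location" keeps a growing pool of user journeys
# running, starting a replacement whenever one finishes.
import concurrent.futures
import random
import time

LOCATIONS = ["us-east", "us-west", "eu-west"]   # hypothetical cloud regions
RAMP_STEPS = [5, 10, 20]                        # concurrent journeys per step
STEP_SECONDS = 10

def run_journey(location: str) -> float:
    """Stand-in for a full scripted user journey; journeys vary in duration."""
    duration = random.uniform(1.0, 3.0)
    time.sleep(duration)
    return duration

def ramp(location: str) -> None:
    for concurrency in RAMP_STEPS:
        deadline = time.time() + STEP_SECONDS
        with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
            pending = {pool.submit(run_journey, location) for _ in range(concurrency)}
            while time.time() < deadline:
                done, pending = concurrent.futures.wait(
                    pending, return_when=concurrent.futures.FIRST_COMPLETED)
                for _ in done:  # as each journey completes, start a replacement
                    pending.add(pool.submit(run_journey, location))
        print(f"{location}: completed step at {concurrency} concurrent journeys")

if __name__ == "__main__":
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(LOCATIONS)) as pool:
        list(pool.map(ramp, LOCATIONS))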

This means you can generate tens of thousands of sessions that look and behave exactly like real users. Better still, you can record session-by-session exactly how your site responds. This allows you to see which pages are likely to cause problems, and gives you detailed insights into how your site performs under load.

This approach addresses all the problems with both API and session-based test tools. The test sessions look and behave exactly like real users, so they will generate the correct sequence of API calls. Your load balancers and infrastructure will behave as they should, because each session looks unique.

Finally, the system is intelligent, so it won’t try to call a new API before a page has properly loaded. This is in marked contrast to other approaches, where you tend to use a fixed delay before starting the next action.

Stress testing is essential for any e-commerce site, especially in the run-up to Black Friday and Cyber Monday. Traditional approaches, such as API and session-based testing, help when you have a monolithic infrastructure, but modern websites are far more complex and deserve better testing.

Intelligent test agents offer stress testing that is far more accurate and effective, allowing you to be confident in how your site will behave under realistic conditions. They can also give you peace of mind that any failure will be handled gracefully.


Paul Clauson is chief evangelist at Functionize.
