Control the expected behavior
Understand how your tests are configured and bring more nuance to your tests
Waldo test configuration
Default step configuration
You should already have a validated foundation test and maybe several more. We mentioned that when you record a test in Waldo, the expected replay behavior for each step is automatically configured based on your behavior in the recorder.
The default step configuration is what turns your user flow into a test. It covers:
- the functionality of buttons
- global UI checks for each screen
The default step configuration consists of the following:
- An interaction: the action you performed on the screen; it's what leads to the next step
- A time limit: the maximum amount of time Waldo will wait before attempting to perform an interaction
- A group of assertions: by default, it includes a Screen Similarity assertion at 85%, but you can add as many assertions as you need for each step.
These three attributes of your steps determine Waldo’s behavior when replaying your tests and are effective for most test scenarios.
Step instructions determine replay behavior
When replaying your test, Waldo attempts to replay each step in the same order. Like a real user, it attempts to navigate from Point A to B.
A real user, picking up cues from the UI, organically understands where each element is and whether it's tappable. They can even navigate the app if the language is different. Waldo does precisely that.
Everything you need to know about Waldo’s run logic
The right time to perform an interaction
We can sum up Waldo's run logic as follows: during replays, Waldo analyzes the screen and attempts to pass all assertions for a specific step. When Waldo passes all the assertions for that step, it attempts to perform the step interaction.
By default, you only have one assertion per step: a screen similarity. Earlier, we learned that the Screen Similarity score is based on the number of elements, their positions on the screen, and their attributes. It does not take into account how screens look pixel-wise.
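As a rough mental model (this is a toy illustration, not Waldo's actual algorithm), you can picture the structural comparison as matching element descriptors between the recorded and live screens; the `similarity` function and the descriptor format below are hypothetical:

```python
def similarity(recorded_elements, live_elements):
    """Toy structural-similarity score: the fraction of recorded element
    descriptors (type, position, attributes) found again on the live screen."""
    if not recorded_elements:
        return 1.0
    matched = sum(1 for element in recorded_elements if element in live_elements)
    return matched / len(recorded_elements)

# Each element is a (type, position, label) descriptor -- pixels are never compared
recorded = [("button", (160, 480), "Sign Up"), ("label", (160, 80), "Welcome")]
live     = [("button", (160, 480), "Sign Up"), ("label", (160, 80), "Welcome")]
print(similarity(recorded, live))  # 1.0, well above the 0.85 default threshold
```

A screen whose structure matches at or above the 85% default threshold passes the assertion even if colors or images look different pixel-wise.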
When you replay this test, Waldo scans the structure of your screen until it matches 85% (the default threshold) on Step 1. The moment the Screen Similarity assertion passes, Waldo performs the step interaction: tapping Sign Up.
Time limit forces Waldo to make a decision
The time limit is the maximum amount of time Waldo will wait to validate all the step assertions. If it cannot validate them in the allocated time, Waldo drops all assertion requirements and attempts the step interaction regardless. The step then either passes or fails.
It’s essentially a way to tell Waldo, “this step should validate within a reasonable timeframe.”
Assertions determine Waldo’s expectations for each step
Waldo's replay logic has further benefits. Because Waldo waits to validate every assertion before performing the step interaction, adding Specific Assertions gives you additional control over the wait time for each step. Before performing an interaction, Waldo is either:
- waiting for all assertions to pass, or
- waiting to hit the time limit,
whichever comes first. That's one of the critical ways Waldo's replay engine prevents flakiness: it absorbs the volatility inherent to testing, such as a button not loading at the same speed on every run.
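The "whichever comes first" behavior can be sketched as a simple poll loop. This is a conceptual illustration under stated assumptions, not Waldo's internal implementation; `replay_step`, the callable assertions, and the interaction callback are all hypothetical names:

```python
import time

def replay_step(assertions, interaction, time_limit=30.0, poll_interval=0.1):
    """Wait until every assertion passes, then perform the step interaction.
    If the time limit hits first, drop the requirements and attempt it anyway."""
    deadline = time.monotonic() + time_limit
    while time.monotonic() < deadline:
        if all(check() for check in assertions):
            return interaction()          # all assertions passed: act immediately
        time.sleep(poll_interval)
    return interaction()                  # time limit reached: attempt regardless

# The step fires as soon as its (toy) assertion returns True
print(replay_step([lambda: True], lambda: "tapped Sign Up", time_limit=1.0))
```

Note that the interaction is attempted on both paths; only the *waiting* differs, which is why a slow-loading button delays the step rather than failing it outright.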
Assertions are the core of your Waldo test
An assertion captures an expectation about the state of an app screen. If a user flow ensures that a user can go from Point A to Point B in your current build, an assertion is what turns a user flow into a test by dictating the logic underpinning how each step replays.
When you add Specific Assertions to a step, you're not only instructing Waldo to run a UI check on a specific element. You're also telling Waldo that this element is essential to performing the interaction, which gives you finer control over wait times. Let's look at how you can use them in this context.
The button loader example
In the default step configuration, with only one Screen Similarity assertion at an 85% threshold, you could end up in a situation during a replay where Waldo matches the Similarity assertion while the loading button is part of the 15% of UI elements that didn't match. Waldo validates the default assertion and attempts to tap the loading (inactive) button. The test fails because it can't move to step 2. Your test result is flaky, but it doesn't have to be with Waldo.
Add an assertion on key UI elements
You can add Specific Assertions to each step to solve this type of flakiness. The key is to identify the UI element critical to navigating to the next step.
When you recorded this test, you waited for the button to finish loading before clicking. To keep it from falling through the cracks of your default Similarity assertion, all you have to do is add an assertion on the button in its fully loaded state.
The next time you run this test, Waldo adds this Specific Assertion to the list of assertions to validate for that step. It waits not only for the Similarity assertion to pass but also for the Sign Up text to appear in the button before clicking.
Your test is not time-flaky anymore.
Use this technique when your test fails because it didn't wait long enough for an element to load. It works wonders!
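The button-loader fix boils down to one idea: the step's assertion group must pass as a whole. A minimal sketch, assuming a hypothetical `step_ready` check with a toy similarity score and a list of visible texts:

```python
def step_ready(similarity_score, visible_texts):
    """A step's assertion group: the default Screen Similarity assertion at 85%
    plus an added Specific Assertion on the fully loaded button text."""
    assertions = [
        similarity_score >= 0.85,       # default Screen Similarity assertion
        "Sign Up" in visible_texts,     # Specific Assertion: button has loaded
    ]
    return all(assertions)

print(step_ready(0.90, ["Welcome"]))             # False: button still loading
print(step_ready(0.90, ["Welcome", "Sign Up"]))  # True: safe to tap
```

The first call shows the flaky scenario from before: similarity alone passes, but the added assertion keeps Waldo waiting until the button is actually tappable.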
Time Limit and Interactions
From your step modal, you can edit both the Time Limit and the Interaction. Let's take a moment to understand how you can use them to your advantage.
You can view the Time Limit as the point at which waiting any longer to validate all the assertions for a step becomes "too much."
If a step is failing, you don't want Waldo to wait indefinitely until your app decides to load the UI. Instead, you want the test to fail within a reasonable time as a signal that something is wrong with that step.
Let's look at how you can define reasonable time.
What is NOT a "reasonable" wait time?
The first thing to note is that you should not expect the same execution speed between each step with Waldo as with a local simulator.
On each step, Waldo does heavy computing: scanning your screen structure, checking and matching assertions, and performing the step interaction. Our algorithm gets smarter and faster by the day, but you shouldn't expect the response time between steps to be instant, so don't define a reasonable time as if you were running your app locally.
The good news is that Waldo usually clears each step in several seconds.
What IS a “reasonable” wait time?
By default, Waldo sets a 30-second Time Limit per step upon recording. That's plenty of time to validate each assertion during replay, and Waldo can usually do so in a few seconds.
In most cases, if Waldo can't validate all assertions within 30 seconds, it means you've found a bug. So you can consider the default Time Limit reasonable.
When to decrease or increase a step's Time Limit?
There are several scenarios in which you'll want to edit a step's Time Limit. They fall into two categories:
- Increase wait time:
- Heavy data: Let's say your app has to load a list of farms with thousands of entries, and that data is heavy. It will take a long time to load. If the step interaction depends on loading a heavy data set, you might hit the 30-second Time Limit. By raising the Time Limit, you're giving Waldo more time to validate the assertions.
- A video needs to play before you can perform the step interaction. If the video lasts 2 minutes, you can increase the Time Limit accordingly.
- Decrease wait time:
- You can lower your Time Limit to a minimum of 5 seconds. We only recommend doing so if you can verify that the step consistently passes in under 5 seconds. Usually, you won't need to decrease it that far.
The step interaction section shows you the details of the recorded interaction, including:
- Interaction type
- Text input
- Interacted element
Editing a step interaction
The only reason to edit a step interaction is when you want to update your test.
In rare cases, Waldo may click on the wrong element and fail to navigate to the next step. This is often due to the accumulation of changes in your app over time.
When your Interaction fails because of a “wrong” click, you can just fix it and save it in 2 clicks.
Click "edit" in the Interaction section, then click on the correct element.
Waldo automatically replays the test to ensure it can complete with this new interaction.
Use Edit Interaction to update interactions on any screen quickly.
Interactions you cannot edit
Currently, Waldo only supports updating the Tap gesture from the step modal. If you need to update another type of interaction, click "Re-record from step" in the step modal.
You will need to do the same to update a text input, whether static or randomized (input variables).
What about Analytics assertions?
Most software companies use Analytics Events to track and understand user behavior. Mobile app companies included.
Analytics are your eyes into user behavior and help improve the experience. But they are also vulnerable to failure, and that failure can have a long-term negative impact. Think about it: your analytics event doesn't trigger, your understanding of what your users are doing is skewed, you optimize your experience for the wrong thing, and resources go down the drain.
There hasn't been an easy way to test that your Analytics Events trigger correctly at the right step. But in 2021, we released Analytics assertions.
How does it work?
Waldo performs end-to-end tests. Our inspector allows us to monitor logs, network requests, and analytics events triggered on each screen. If you’ve tried Waldo Live to validate your build as covered in Waldo 101, you know what we are referring to.
When you record your test, Waldo keeps a record of all the Analytics events triggered at each step in your user flow.
Assert your Analytics events
You can see which Analytics events are associated with a step by clicking on the Analytics tab in the Step Modal.
Under the Analytics tab, you will find a list of all the analytics you can assert for a given step.
As with Specific and Global assertions, you can add Analytics assertions from the step modal.
Click “Add assertion,” then select ‘Analytics’ as the type.
Select one analytics from the list and save it.
Repeat the process as many times as you need.
Once saved, Waldo will validate that the analytics event was triggered, in the same fashion as the other types of assertions.
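Conceptually, an Analytics assertion is a membership check against the events captured at a step. A toy sketch (the function name and event names below are hypothetical, not Waldo's API):

```python
def analytics_assertion_passes(expected_event, events_at_step):
    """Toy check: did the expected analytics event fire during this step?"""
    return expected_event in events_at_step

# Hypothetical event names captured while replaying a sign-up step
captured = ["screen_view", "signup_button_tap"]
print(analytics_assertion_passes("signup_button_tap", captured))  # True
print(analytics_assertion_passes("purchase_complete", captured))  # False
```

As with the other assertion types, a failing Analytics assertion keeps the step from validating, surfacing broken tracking before it skews your data.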