
Creating Point Allocation questions surfaces participants’ opinions about how they’d allocate their time, dollars, or simply points for a specific purpose. This question type lets you gauge the relative difference in importance between your ideas.

Here are our tips on how to hop in and start using Point Allocation when learning from customers.

  • Assess quantitative value or time. Point Allocation is superb for understanding how much customers perceive items to cost or how much time a participant spends completing certain tasks. This kind of quantitative data can help teams understand where customers are willing to spend their time or hard-earned money.

  • Limit your categories. Too many choices in a survey overwhelm participants and will skew data. We recommend limiting the choices used in Point Allocation questions to no more than 7 items. If you find that you have more than 7 items, you can use a test to narrow down your choices to the most important ones: we recommend a Ranking or Multiple Choice question to do so!

  • Keep items distinct. To ensure your data isn’t skewed, it’s important to make sure your items are different enough that they don’t overlap. If you’re asking how much money someone is willing to spend on a collared shirt versus a dress shirt, your data could be affected by the perceived similarity of the two items.

We hope this helps you get started with creating your very own Point Allocation questions in Helio. If you need help, we’re happy to offer suggestions, just let us know!
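Under the hood, Point Allocation data boils down to each participant dividing a budget of points across your items. As a rough illustration of how such responses can be summarized (this is our own sketch with made-up data, not Helio’s actual reporting code), here’s one way to average allocations into relative-importance shares:

```python
# Illustrative only: summarize hypothetical Point Allocation responses
# by averaging each item's share of the participant's total points.
def allocation_shares(responses):
    """responses: list of dicts mapping item -> points allocated."""
    totals = {}
    for response in responses:
        budget = sum(response.values())
        for item, points in response.items():
            totals[item] = totals.get(item, 0.0) + points / budget
    n = len(responses)
    return {item: round(100 * share / n, 1) for item, share in totals.items()}

# Two hypothetical participants, each allocating 100 points:
sample = [
    {"Speed": 60, "Price": 30, "Design": 10},
    {"Speed": 40, "Price": 40, "Design": 20},
]
print(allocation_shares(sample))  # {'Speed': 50.0, 'Price': 35.0, 'Design': 15.0}
```

Averaging shares (rather than raw points) keeps participants with different total budgets comparable.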


Ranking questions ask participants to compare a list of items or features to each other by placing them in order of preference. Ranking questions are awesome for finding out the desirability of a list of items or features.

Ranking questions have customizable labels, so the scales you can create have endless possibilities.

In our example, we’ve asked participants to rank Hulu content based on how interested they are in the following categories.

Once your participants have completed their survey, you’ll see the data displayed on a 100-point scale.
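To give a feel for how ranked data can map onto a 100-point scale, here’s a simple, common conversion (our own illustrative sketch, not necessarily Helio’s exact formula): first place scores 100, last place scores 0, and scores are averaged across participants.

```python
# Illustrative rank-to-score conversion (not necessarily Helio's exact
# formula): position 0 (top rank) maps to 100, the last position to 0,
# and each item's scores are averaged across all participants.
def rank_scores(rankings):
    """rankings: list of lists, each ordered from most to least preferred."""
    n = len(rankings[0])
    totals = {}
    for ranking in rankings:
        for position, item in enumerate(ranking):
            score = 100 * (n - 1 - position) / (n - 1)
            totals[item] = totals.get(item, 0.0) + score
    return {item: round(total / len(rankings), 1) for item, total in totals.items()}

# Two hypothetical participants ranking three content categories:
sample = [
    ["Dramas", "Comedies", "Documentaries"],
    ["Comedies", "Dramas", "Documentaries"],
]
print(rank_scores(sample))  # {'Dramas': 75.0, 'Comedies': 75.0, 'Documentaries': 0.0}
```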


Learning from customers becomes actionable when teams can be active participants in the learning.

See our example here (pssst, make a copy to use too)

To best display Landing Page Conversion data in a presentation, collect your findings on the provided spreadsheet. Focus on these 3 key areas: Conversion, Emotional Reception, and Net Promoter Score.

Conversion highlights the participants who successfully completed the primary action on your landing page.
Emotional Reception shows the reaction of participants, as well as an overall Net Positive Alignment score.
Net Positive Alignment provides a way to see the response to the experience’s UI.

Grab the NPS reading and drop it into the Net Promoter Score box of the framework. Don’t forget to link all your findings, and provide participant quotes to paint the full picture!
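If you’re computing the NPS reading by hand from raw 0–10 ratings, the standard formula is the percentage of promoters (9–10) minus the percentage of detractors (0–6). A quick sketch with made-up ratings:

```python
# Standard NPS calculation: % promoters (ratings of 9-10) minus
# % detractors (ratings of 0-6); passives (7-8) are ignored.
def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Hypothetical ratings from seven participants:
sample = [10, 9, 8, 7, 6, 3, 10]
print(net_promoter_score(sample))  # 14
```

The result ranges from -100 (all detractors) to +100 (all promoters).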


We’re so excited to hop into how you can maximize your surveys by including MaxDiff questions. In the simplest terms, the MaxDiff question type measures a customer’s preference for, or the importance of, items on a list, usually indicating the “best/worst” or “most/least”.

Start by creating a test and selecting the MaxDiff question type.

When crafting questions that ask customers to make trade-offs between a list, it’s best to ask “Which of the following is most/least important?” In the example below, we ask customers to select which is most/least important to them when planning a vacation.

Your data will be presented in an easy-to-read, color-coded graph. In our example, “Beach access” ranked as most important in purple and “Nightlife” ranked least important in light purple.

MaxDiff is very versatile: teams from design and development to marketing and sales can utilize this gem of a question type to learn a lot from customers. Some highlights include prioritizing features or function, what marketing messaging resonates with a customer, and what a customer’s focus is when making the decision to purchase. We can’t wait for you to give MaxDiff a try!
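For the curious, the simplest common way MaxDiff responses are scored is count analysis: each item’s score is the number of times it was chosen as “most” minus the number of times it was chosen as “least.” This sketch uses hypothetical vacation-planning responses and isn’t necessarily Helio’s exact method:

```python
# Illustrative MaxDiff count analysis (not necessarily Helio's exact
# method): score = (times picked most important) - (times picked least).
def maxdiff_counts(responses):
    """responses: list of (most_important, least_important) pairs."""
    scores = {}
    for most, least in responses:
        scores[most] = scores.get(most, 0) + 1
        scores[least] = scores.get(least, 0) - 1
    return scores

# Three hypothetical participants' most/least picks:
sample = [
    ("Beach access", "Nightlife"),
    ("Beach access", "Museums"),
    ("Museums", "Nightlife"),
]
print(maxdiff_counts(sample))  # {'Beach access': 2, 'Nightlife': -2, 'Museums': 0}
```

Positive scores indicate items customers gravitate toward; negative scores indicate items they consistently deprioritize.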


Congratulations, you’ve run a test and have collected answers! We think that reviewing answers is a lot of fun and we’ve added the ability to LOVE an answer in Helio.

To Love an answer, begin by reviewing the answers within your test. If you think an answer is awesome, press the “💜” within that answer.

“💜” an answer that:

  • Resonates with you, your team, or project.

  • Is thoughtful or provides a personal story that is helpful to your research.

  • Makes a great point that you want to share with your team.

  • Inspires a new idea that you would like to test further.

“💜”ing an answer is a great way to collaborate with your team and to make a connection with your customers.

  • Connect to your customers. Hitting the “💜” on an answer in Helio sends an email to that responder letting them know, anonymously, that their answer was really helpful. This spreads good vibes and over time encourages your customers to keep providing thoughtful and meaningful answers.

  • Collaborate with your team. Collaboration is important when learning from customers. When sharing your tests with internal team members the “💜” surfaces the answers you find are most insightful. Your team members will also be able to spread the love by “💜”ing the answers too.

  • Filter by your favorite answers. Another added benefit of “💜”ing answers is the ability to filter by all your favorite answers and come back to them later.


Sometimes responses don’t align with your test. Perhaps there’s keyboard mashing (sdfasd), an unintelligible response (ir knt think bt), or perhaps the participant is responding with the same text for every answer (e.g., “N/A” or “not sure”).

We provide a method to hide responses that aren’t valuable, or to flag responses that need participant review.

Here’s how to flag a response:

  • Click on “Tester ID” to quickly review all responses from the participant.

  • Confirm the participant’s intent to not answer the test.

  • Click the purple flag in the response box.

  • Confirm that you really want to flag this user, and choose an appropriate flag reason. (Be sure: flagging is a permanent action.)

The above steps remove the participant from the entire test; their responses won’t show up in any .CSV export or other test reporting.

NOTE: You can resurrect responses if you accidentally hide them! Simply click the 3-dot icon near the top right of the screen and choose Hidden from the dropdown menu.

As always, don’t hesitate to reach out if you have questions about gauging the validity of a participant’s response, or if you need help using these features.

Cheers!
The Helio Team


You may need to pause your test for a variety of reasons: you forgot a question, you need to fix a typo, images aren’t quite right, etc.

Test questions cannot be edited once they’ve collected responses, as results would be skewed. Instead, use the 3-dot menu (…) in the upper right-hand corner to copy the test. Don’t worry, you can delete the first version of the test if you wish.

Follow these easy steps:

  1. Pause the test by clicking the button next to “Test Running” at the top right.

  2. Copy the paused test. (Click the 3 dots and choose copy)

  3. Rename the new test title to whatever you like.

  4. Make the necessary updates to your test.

  5. Preview the test (preview twice, send once!)

  6. Send the test!

Pro Tip: We suggest using the “Preview Test” button every time you’re ready to send any test to ensure things are right the first time. Not only does this help identify unnecessary questions, errors, or typos, but the preview link can also be sent to members of your team so they can confirm the test without actually taking it! Amazing!

Don’t hesitate to reach out if you’ve got any additional questions!

Cheers!
The Helio Team


We currently have 5 templates available that help provide structure based on where you’re at in your process so it’s easier to hop right in. You can access these templates by selecting the down arrow right next to the “Create Test” button:

Read on to learn about when and why to use each template, how each works, and some useful recommendations.

Variation Validation

When to use: You have multiple variations of an interface, and are trying to determine which is more effective.

How it Works > 2 test types, Click (with multiple variations) and Preference

  1. Click Test (Multi-variate): “Click where you would expect to proceed to the checkout screen.” This helps you determine the user’s success rate and response time. Make sure that all the screens you upload have the context needed to execute the task you give.

  2. Preference: Put the same variations from the Click Test into a preference test. This helps us understand their aesthetic preference. Placing this question second prevents exposure bias that may muddy the results of the Click Test.

Why to use: To determine which variation of your design is more effective and usable, you want to gauge two things: the user’s ability to take action successfully and what is easiest for them to comprehend. Multi-variate testing shows trends in how well users can use your designs without the danger of being biased by the other design; the preference test then lets users choose their preferred layout after gaining context from the tasks they completed.

Recommend: You’ll want to compare two key metrics on the multivariate click test: success percentage and response time. How did one stack up against the other? Then take the preference choice into account and make sure to ask them why they chose their preference! Quantitative data becomes much more impactful when it’s accompanied by qualitative data.
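As a quick illustration of how that comparison might look once you’ve exported your results, here’s a sketch that summarizes success percentage and median response time for two variants. The data and function names are hypothetical, not Helio’s reporting API:

```python
from statistics import median

# Illustrative only: compare two click-test variants on the two key
# metrics (success percentage and response time). Each entry is one
# hypothetical participant: (hit_target, seconds_to_click).
def summarize(clicks):
    success_pct = round(100 * sum(1 for hit, _ in clicks if hit) / len(clicks))
    median_time = round(median(t for _, t in clicks), 2)
    return success_pct, median_time

variant_a = [(True, 2.1), (True, 3.4), (False, 6.0), (True, 2.8)]
variant_b = [(True, 1.9), (False, 5.2), (False, 4.7), (True, 2.2)]
print("A:", summarize(variant_a))  # A: (75, 3.1)
print("B:", summarize(variant_b))  # B: (50, 3.45)
```

Here variant A wins on both metrics; pairing that with the preference results and the “why” responses gives you a defensible call.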

Post-Launch Survey

When to use: You’ve just launched a new product, service, or website, and you would like to get direct feedback from your users.

How it Works > 3 test types, Free Response, Multiple Choice, NPS

  1. Free Response: Upload an image of your newly completed product or site. Ask users to interpret the page: what do they see, and what actions can they take?

    This allows you to see if users comprehend the context of the page before being given any tasks or questions that might give them a hint.

  2. Multiple Choice: Ask for specific feedback on the image from the previous question (Free Response). Multiple choice allows you to get structured feedback that will help you quantify the reactions you get.

  3. NPS: Utilize the NPS to understand the likelihood that users will use and recommend your product or service.

Why to use: You just launched, so you’re done now, right? NOPE. Now is the time to refine, and this template will get you started with a whole slew of new feedback to use in the next iteration.

Recommend: Apply images for direct context on your survey questions, or simply poll your audience from a contextual source. This can be a great tool when leveraged against a customer list, or when providing a survey link on your website. (Can’t upload your customer lists? Reach out to us to learn more: [email protected])

Interaction Assessment

When to use: When you want to determine how visual or hierarchical changes affect your users as they use or view your designs.

How it Works > 6 questions, utilizing 2 test types, Click and Preference

  1. Click Test: Upload a screen and ask users to perform a task that helps them get perspective and context. For example, you may want to ask users to find something really obscure on the page so you know they’ll have spent some time considering it.

  2. Preference: Upload the different versions of your views that contain the changes you are trying to test for. Make sure you have specific items you are testing for (e.g., color, hierarchy, menu layouts).

  3. Preference: More variations of the same view, with discrete changes.

  4. Click Test: Next, upload a new screen (not the same as before). Like before, this click test is designed to build context for the following preference questions.

  5. Preference: Upload the variations of the screen from the Click test. You should be testing the same kind of changes you tested for in the first two preference tests, just in a new screen.

  6. Preference: More variations of the same view, with discrete changes.

Why to use: By setting context and then asking for preference around a focused set of changes, the feedback you receive will likewise be more focused. Testing the same kinds of changes across multiple screens helps you better understand which aspects of the changes were truly effective.

Recommend: Keep the differences between variations narrow and focused. You want your users commenting on specific changes, not sweeping revolutions.

This allows you to make informed decisions on each individual change, with clear data to back you up.

Layout Comprehension

When to use: When you aren’t sure whether users will understand the page correctly or lose the context of a page.

How it Works > 4 questions, utilizing 2 test types, Free Response and Click

  1. Free Response: Upload a screen and ask your users to describe what they see.

  2. Click Test: First impression in hand, give the testers a directive to complete with the same screen from the Free Response question. Pay careful attention to how their initial perception differs from their response to the directive.

  3. Free Response: Upload a new screen and ask users to describe what they see.

  4. Click Test: Same as last time around, ask a directive or series of directives about the screen from the Free Response. You can repeat this combination as many times as you like.

Why to use: Asking users to describe the screen before giving them any directive gives them a chance to process what’s going on in isolation. This lets us audit what bias they bring with them, and also helps ensure they understand the context better when given a specific directive later.

Recommend: Don’t use this test method if you are trying to diagnose response time. This test is especially helpful if you are attempting to evoke more thoughtful responses to your directives.

Sequence Analysis

When to use: You have multiple states of a screen or a UI-based workflow, and you want to gauge a user’s ability to execute actions throughout the entire flow.

How it Works > At least 3 questions, all Click Test types

  1. Click Test: Upload a screen and give the tester a flow-based directive, e.g., “Add ‘Super Flow Diapers’ to your cart”. This allows us to determine ability and speed to complete the first task in a flow.

  2. Click Test: Upload the next screen in the sequence and give the user another directive. NOTE: it’s best to keep these “flow” tests focused on a discrete set of tasks to get the best results. If you have additional interactions or flows to test, try creating another test.

  3. Click Test: Upload the last screen in the sequence and provide the final directive. NOTE: You can add as many click questions as required to test your flow.

Why to use: To understand how users traverse through your app/website in simple and complex scenarios. This template is especially useful for workflows.

Recommend: Don’t focus solely on completion success; also keep in mind how long it took users to find the desired interaction.

Though simulated, this sort of test can also give you an approximation of overall flow time.

Don’t hesitate to reach out if you’ve got any additional questions!

Cheers!
The Helio Team


Click below to see an example of how you might set up an interaction test using Helio’s hotspots and branching logic!

https://my.helio.app/report/01D1709KMS0WH0ACNBWM8G38M2


Click the link below to learn more about Helio’s Prototype Directive:

https://docs.google.com/presentation/d/1JmxaqunoHrWy6voyrDTNIQFzh72m_L7P97CXgnET4es/edit?usp=sharing