What Is The Likert Scale
The Likert Scale is a rating scale that’s often used when surveying your customers regarding their experiences with your brand – from the service they were provided to the overall effectiveness of your product.
It’s one of the most popular question types used by customers of Fieldboom (our survey software) when collecting audience feedback.
The Likert scale is a series of questions or items that ask your customers to select a rating on a scale that ranges from one extreme to another, such as “strongly agree” to “strongly disagree.”
Unlike binary “yes or no” questions, the Likert scale gives you deeper insight into what your customers are thinking and how they feel.
When To Use The Likert Scale
The Likert Scale is best used to measure and evaluate customer sentiment on a specific product, service or experience.
Likert items that center around the same topic should be grouped together in your survey, creating what’s called a “single-topic” Likert scale.
The scale itself, regardless of whether it uses numeric or text labels, should be consistent on each item; this prevents confusion for your customers and simplifies the analysis of their answers for you.
The most valuable Likert item sets include additional questions that capture open-ended feedback to tell you more about why each customer chose the answer they did.
Likert Scale “Points”
Technically, Likert scales can consist of any number of “points,” or response choices. But, for our purposes, it’s best to provide enough options for your customers to provide an accurate response – but not so many that they become overwhelmed.
Most Likert scales within customer satisfaction surveys provide either five response options:
An example 5 point Likert Scale question in a survey created with Fieldboom
Or seven points:
An example 7 point Likert Scale question in a survey created with Fieldboom
Obviously, the 7-point Likert scale allows customers to provide more accurate responses (while, again, not providing too many choices).
But there’s a more subtle difference between 5-point and 7-point Likert scales.
Notice that, in the 5-point example, the most negative response translates to the customer saying there is a 0% chance of them recommending the product. But, that doesn’t necessarily mean they’ll recommend against using the product. In other words, the extreme negative response isn’t a polar opposite of the extreme positive response; it’s simply a null point.
On the other hand, the 7-point Likert scale question does deal with polar opposites. Rather than the extreme negative response translating to “nil,” as in the 5-point scale question, the central response represents “no satisfaction.” The extreme negative response, then, represents the opposite of the extreme positive: not only is a person who responds this way not satisfied by the service, but they’re entirely dissatisfied with it.
(Note: There are actually 53 different Likert Scales you can use in your survey to capture feedback about value, relevance, frequency, quality and more. We’ve included all of the scales along with copy-and-paste answer options in this guide, which you can download instantly.)
A Quick Note On Odds & Evens
Likert scale questions can provide either an odd or even number of response options. Neither way is necessarily “better” than the other: it simply has to do with your preferences and purposes.
An odd number of choices, as illustrated above, allows respondents to report neutrality.
An example 5 point Likert Scale question that provides a neutral option, in a survey created with Fieldboom
On the one hand, there’s a chance that some customers might simply use the “neutral” choice as a way to skip the question altogether – meaning they don’t provide any valuable information regarding the question at hand.
On the other hand, neutral responses can prove valuable, in that they indicate your service didn’t do enough to lead your customer to form an opinion on the topic at hand.
By providing an even number of choices, you don’t allow for neutrality.
Respondents must choose a positive or negative answer. While this might lead customers who were “on the fence” to think a little deeper about a certain question, some might simply skip the question altogether.
There’s also the very real possibility that a customer truly doesn’t have an opinion regarding a certain question. If these individuals are forced to choose a side, their response could skew the overall results of the survey.
In either case, there’s a possibility of your customers responding ambiguously. In these cases, your best bet may be to provide room for your customers to expand on their answers in order to avoid discrepancies and unusable data.
Likert Scale Examples
A Likert Scale can be used in just about any situation where you want to use a rating scale to get insights into your customers’ behaviors and feelings. However, the most common Likert Scale examples include the following use cases:
- Agreement
- Likelihood
- Satisfaction
- Importance
Let’s take a look at each in a bit greater detail.
An Agreement Likert Scale question, in a survey created with Fieldboom
“The checkout process was straightforward”
- Strongly Agree
- Agree
- Neither Agree Nor Disagree
- Disagree
- Strongly Disagree
An agreement scale is the most common use case for a Likert Scale. Using this format, your customers would be provided with a series of statements, for which they select Strongly Agree, Agree, Neither Agree Nor Disagree (or Neutral), Disagree or Strongly Disagree.
A Likelihood Likert Scale question, in a survey created with Fieldboom
“I would recommend this product to my friends”
- Very Likely
- Not Likely
- Very Unlikely
The likelihood version of a Likert scale is most often used to determine the probability that your customers will adopt a particular behavior, whether that behavior is buying a product or recommending a service to others.
A Satisfaction Likert Scale question, in a survey created with Fieldboom
“Please rate your satisfaction with your recent customer service experience:”
- Very Happy
- Somewhat Happy
- Not Very Happy
- Not at All Happy
This common Likert scale measures how satisfied each customer is with a particular experience, product or service. As with the example above, the satisfaction-based question is most often used to get an opinion from customers about your service or support.
An Importance Likert Scale question, in a survey created with Fieldboom
“Rank each item in reference to its importance to you:”
- Very Important
- Important
- Moderately Important
- Slightly Important
- Not Important
The importance scale provides deeper insight into reasons behind more general opinions. It shows how strongly your customers rank the influence of various factors for an experience, product or service.
These are just a few examples of the Likert scales that can be used when surveying your customers. Want to see more examples? Check out our free guide, The 53 Likert Scales With Copy-And-Paste Answer Choices to help you get started.
How To Report On The Likert Scale
Though analyzing Likert scale data can be quite a scientific and mathematical undertaking (especially when determining validity, distortions, etc.), in this section we’ll discuss a simple way to interpret the data you’ve collected: determining the percentage of your customers who respond a certain way to each individual question.
Most commonly, Likert scales are evaluated by giving each option a value and then adding these values together to create a score for every customer.
Though relatively simple, this reporting method makes it easy to evaluate the opinions revealed by each Likert option. A chart of scores can offer visual insight into sentiment on a particular Likert scale.
By doing this for each question, you’ll be able to determine areas in need of improvement, as well as areas in which your company is thriving.
You also might notice areas of concern where you initially might have thought things were running smoothly. In the example above, though most respondents reported at least an above-average level of satisfaction, there are still those who reported a below average or poor experience. In this case, it’d be worth digging into how these customers responded to other survey questions to get a better idea of what went wrong in their experience.
The most important factors in reporting on the Likert scale are consistency in values and cohesiveness in questions or items that are evaluated together. Questions that are out of place can skew the results, making it harder to take the right actions based on the answers your customers give you.
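As a minimal sketch of the reporting approach above, here is how you might tally the percentage of customers choosing each option on a single question. The responses are hypothetical, illustrative data coded 1–5, not output from any particular survey tool:

```python
from collections import Counter

# Hypothetical answers to one 5-point Likert question,
# coded 1 (most negative) through 5 (most positive).
responses = [5, 4, 4, 3, 5, 2, 4, 5, 1, 4]

counts = Counter(responses)
total = len(responses)

# Percentage of respondents who chose each option.
for option in range(1, 6):
    share = 100 * counts.get(option, 0) / total
    print(f"Option {option}: {share:.0f}%")
# Option 1: 10% ... Option 4: 40%, Option 5: 30%
```

Repeating this per question makes it easy to spot the distribution shifts discussed above, such as a cluster of below-average responses hiding under a mostly positive average.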
Speaking of skewed results…
Possible Survey Distortions
Following on from that last point, the answers your customers provide when completing satisfaction surveys might not always be entirely accurate.
Simply put: human nature sometimes gets in the way of customers responding openly and honestly.
The most efficient way to combat these distortions is to always give your respondents the option of providing more detail – or discussing confusion – regarding a specific question or set of questions.
However, it’s still important to understand the possible biases your customers come to the table with in order to identify possible distortions or discrepancies among your data. The biases you’re most likely to encounter are:
Central Tendency Bias
As the name implies, central tendency bias refers to the notion that some respondents may avoid choosing the most extreme options provided.
The most common explanation for this tendency is that respondents don’t have a clear definition of the extreme high or extreme low with regard to a specific question.
For example, when responding to the question “How would you rate our company’s customer service?” (with responses ranging from “Unhelpful” to “Extremely helpful”), a customer who did receive exemplary customer service might get caught up in the semantics of what “extreme” actually means. It’s possible that, although they acknowledge the service was exquisite, they are hesitant to report that it was the be-all-end-all of customer service.
Another explanation for central tendency bias is that customers might initially “save up” their “extreme” answers for later questions. If they answer the first question with an “extreme” answer, they might view the rest of their answers through the lens of that first answer: to give an “extreme” answer to a subsequent question, their level of satisfaction would have to match the level they felt for the first one.
In addition to providing opportunities for respondents to expand on their answers, you can also avoid falling victim to central tendency bias by either providing context for what certain terms (such as “excellent”) mean, or allowing respondents to define the terms in their own words.
Extreme Response Bias
In contrast to central tendency bias, extreme response bias is the tendency for some respondents to only answer in extremes.
There are, again, a number of reasons this might occur, including:
- Cultural attitudes
- Intelligence level of respondents
- Level of effort respondents put into completing survey
- The way in which questions and choices are worded
Of these four reasons, the only one you truly have control over is the last one. Ensure the questions you ask don’t lead respondents toward a certain answer, and also that each option is clearly defined and understandable.
You can address the issues of cultural and intellectual diversity by soliciting demographic and other personal information from each respondent. While not an absolute determinant by any stretch, it’s possibly the closest you can get to figuring out why a respondent answered as they did (short of asking them, of course).
Regarding the amount of effort they put into completing the survey, you might ask them to report the amount of time it took them to complete it. This could help you determine if they put serious thought into their answers, or if they simply used a “black or white” approach, ignoring all “in-between” answers.
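One rough way to spot the “black or white” pattern described above is to flag respondents whose answers are all scale endpoints. This is an illustrative sketch with made-up data, not a validated screening method; flagged respondents merit a closer look, not automatic exclusion:

```python
def answers_all_extreme(answers, low=1, high=5):
    """True if a respondent chose only the scale endpoints."""
    return all(a in (low, high) for a in answers)

# Hypothetical respondents on a 5-point scale (illustrative data).
respondents = {
    "r1": [5, 1, 5, 5, 1],   # endpoints only: possible extreme response bias
    "r2": [4, 3, 5, 2, 4],   # uses the in-between options
}

flagged = [rid for rid, ans in respondents.items() if answers_all_extreme(ans)]
print(flagged)  # ['r1']
```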
Acquiescence Bias
Acquiescence bias refers to a respondent’s tendency to go along with a statement in an effort to avoid ruffling feathers or insulting anyone.
For example, say a customer received subpar service from an employee who truly did their best to meet the customer’s needs. Though the customer didn’t end up with the result they desired, they might report they received “excellent” customer service from said employee simply because they tried to be helpful. But, for the purposes of the survey, such a response isn’t helpful at all.
To avoid such a discrepancy, ask clear and specific questions throughout your survey. Using the above example, if the customer was asked about the employee’s willingness to help (in addition to being asked about the service they received), they would have a chance to show that the employee tried to help, but ultimately wasn’t able to.
Another way to possibly avoid acquiescence bias is to make clear to respondents that the purpose of the survey in the first place is to improve customer service across the board. Though there will certainly still be cases of such bias, this simple disclaimer might open the door for more honesty from many of your customers.
Likert scale questionnaires can help you gain valuable insight regarding your customers’ experience and levels of satisfaction with different aspects of your company. In turn, you’ll be able to improve areas in which you fall short of your customers’ expectations – and strengthen the areas in which you’re already doing quite well.
Ellen is a wordsmith for Fieldboom. She has an obsession with helping small businesses and a passion for social media marketing.
The types of survey questions used in a survey will play a role in producing unbiased or relevant survey responses. As the survey designer, consider the types of questions to use and when it is appropriate to use them. Question types range from open-ended (comments to essays) to closed-ended (yes/no, multiple choice, rating scale, etc). In the end, it is the question types that determine what type of information is collected.
1. Open-Ended Types
Open-ended questions are those that allow respondents to answer in their own words. In an online survey, textboxes are provided with the question prompt in order for respondents to type in their answer. Open-ended questions seek a free response and aim to determine what is at the tip of the respondent’s mind. These are good to use when asking for attitude or feelings, likes and dislikes, memory recall, opinions, or additional comments. However, there can be some drawbacks to using open-ended questions:
- Sometimes respondents may find it difficult to express their feelings. This can result in respondents answering “I don’t know” or skipping the question.
- They do take more time and effort to fill out and at times they can have a larger skip rate.
- In addition, analyzing open-ended comments can be time consuming and difficult. We have aimed to make that process a bit easier for Professional subscribers by offering a few summary spreadsheet formats in Excel, HTML, or downloading individual questions into a PDF.
2. Closed–Ended Types (Multiple Choice – One Answer or Multiple Answers)
Closed-ended questions are those with pre-designed answers with a small or large set of potential choices. One type of closed-ended question is a “dichotomous” question which allows respondents to choose one of two answer choices (e.g. Yes or No), while another type is the “multi-chotomous” question, which allows respondents to choose one of many answer choices.
3. Ranked or Ordinal Questions
Ranking questions are best to use when all the choices listed should be ranked according to a level of specification (e.g. level of importance). If you have a question in which you need the respondents to indicate what items are the “most important” to “least important” then you can set up a ranking question.
4. Matrix & Rating Types
The matrix & rating type questions are used when measuring the frequency of something, such as a behavior or attitude. It is best to present the rating scale in a logical or consistent order, so it makes sense to order the rating choices from low to high (e.g. Strongly Disagree to Strongly Agree, going from left to right).
If you set up the rating scale in your survey in this format of “Strongly Disagree” to “Strongly Agree” make sure that the rest of the survey is consistent and all rating scales go from the low to the high frequency throughout (or vice versa). In addition, some surveys may only label the outliers or endpoints of the scale, but it is good practice to assign a label or number to each rating scale.
Please note: Some studies have shown that using verbal descriptors only at the endpoints versus at every scale point can affect the distribution of the data collected.
The two common types of matrix-rating scales are called likert and semantic differential scales. Rating scales are popular ways of collecting subjective data where you want to measure a respondent’s ideas (e.g. opinions, knowledge, or feelings). When creating rating scales, likert scales in particular, consider if you want the scales to be balanced or unbalanced. The following sections discuss these two scales and the difference between balanced vs. unbalanced scales.
a. Likert Scales
A Likert scale is considered an “agree – disagree” scale. This setup gives respondents a series of attitude dimensions. For each dimension, the respondent is asked whether, and how strongly, they agree or disagree, using a point rating scale. Each answer choice is assigned a score or weight, usually from 1 to 5. The purpose of the Likert scale is to sum (or average) the scores for each respondent, the intent being that each statement represents a different aspect of the same attitude.
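A minimal sketch of this scoring, using hypothetical answer labels and the usual 1–5 weights (the labels and data are illustrative assumptions, not a fixed standard):

```python
# Hypothetical 5-point agreement labels mapped to weights 1-5.
WEIGHTS = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neither Agree Nor Disagree": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

# One respondent's answers to four statements about the same attitude.
answers = ["Agree", "Strongly Agree", "Neither Agree Nor Disagree", "Agree"]

score = sum(WEIGHTS[a] for a in answers)   # summed Likert score
average = score / len(answers)             # response average
print(score, average)  # prints: 16 4.0
```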
b. Semantic Differential Scales
The semantic differential scale is one that has the opposite ends of the scale marked with two different or opposing statements. Respondents are then asked to indicate where they fall on the scale. Unlike the Likert scale, semantic differential types do not need a labeled statement for each rating along the scale. A seven-point scale is typically recommended for these types, and it is good to keep the statements at the opposite ends short and precise.
5. Balanced vs. Unbalanced-Rating Scales
A five-point rating scale typically gives sufficient discrimination and is easily understood by survey participants, so it is recommended for most survey settings. However, there is no set limit on the number of categories to use. Using too few could yield less nuanced information, while using too many could make the question hard to read and answer. Related to this setup is the decision of whether to include a “middle category.” The content and analytical purpose of the question will determine whether you want a balanced or an unbalanced rating scale. A balanced scale is composed of an equal number of positive and negative labels anchored by opposite poles, with or without a midpoint.
There are some occasions in which an unbalanced scale is suitable. For example, in a customer satisfaction survey, few customers may say that something is “unimportant.” In the example scale below, the “important” will become the midpoint. In this scenario, you are trying to obtain a degree of discrimination between the “levels of importance”:
- Not Important
- Neither Important nor Unimportant
- Important
- Very Important
- Extremely Important
Here is where you decide whether to provide a “neutral” middle category in your scale. If a neutral choice is a real possibility, then you may want to include a midpoint answer choice. However, if you want the respondent to take one side over the other, then an even number of categories is suggested. This will force respondents away from the neutral response. Some researchers argue it is best to force respondents in one direction or the other. If you choose the unbalanced form and force respondents away from the neutral alternative, be careful that this does not introduce bias into the data.
A good survey design should help to stimulate recall (if necessary); it should motivate the respondent to reply; and the survey should flow in an orderly fashion. The sequence of questions will help to create a certain flow to the survey. This flow will also ease or arouse the respondent’s interest and overcome his/her doubts about the survey’s intent. As a general guideline, there are three areas regarding question sequence: opening questions, question flow, and location of sensitive questions.
1. Opening questions – The first few questions in the survey should be easy and interesting in order to calm any participants’ suspicions about the survey’s integrity. This allows the participants to build up confidence in the survey’s objective. In return, this may stimulate their interest and overall participation.
2. Question flow – The question sequence in the survey body should take on a flow of ideas and be geared towards the respondents’ abilities. After you have established the first general topic, all related questions should come up before a second topic is raised. It is a good idea to use “pages” in the online design to house each different section of the survey. Here you can raise one topic on one page and include the instructions/information for this section in the Page Description area. When you are then ready to introduce a new topic to the survey, you can create a new or second page to include that page’s description and purpose. Conditional or Skip Logic questions are also a good way to control the respondent’s flow or route through the survey. You can apply question or page skip logic to the survey when you want to guide respondents and exclude them from certain pages of questions that do not apply to them.
3. Location of sensitive questions – Some suggest that sensitive questions should not be included at the beginning of the survey. However, there are no set rules on this. If you do include sensitive questions at the beginning of the survey, then you may run into respondents rejecting the survey and exiting early. They may not have built up confidence yet in the survey’s integrity quite so early. Questions like demographics or personal information are usually best to introduce towards the end of the survey. This way, respondents are likely to have already developed confidence in the survey’s objective.
1. Basic guidelines
a. Introduction
When designing your survey structure, the overall format and layout is important from beginning to end. A poorly organized survey may cause respondents to skip questions or completely opt out of answering your survey. It is good practice to begin your survey with an introduction that explains the survey’s purpose. Within the introduction, you may want to include the name of the organization conducting the survey, the confidentiality information, and how the data collected will be used. Many participants like some kind of assurance in regards to their responses; providing that kind of information before the survey starts can help ease those concerns. You may also want to provide an estimate of how long the survey might take or whether you are offering any kind of incentive or prize for taking the survey. Remember to deliver on your promised gift! If you provide this information up front it usually leads to honest responses and more completed surveys.
Providing general instructions on how to progress through the survey in the introduction or within each new section is important in letting your audience know how the survey works. From here respondents will not have to look back and forth in the survey to see what they are supposed to do.
b. Body of the Survey Design
The use of space throughout the survey is also important. Trying to fit too much information (e.g. too many questions) on a single page may cause respondents to struggle through the survey. If your survey has multiple sections or parts, then it is good to introduce each new section as suggested previously. Keep in mind to make the sections and questions flow in a sequential order that makes sense to the respondents.
Here are some tips to remember when designing the look of your online survey:
1. Make the survey visually appealing and user-friendly.
2. Try not to use small fonts or fonts that are not easy to read. Some participants may have a difficult time reading small print.
3. To avoid clutter, use white space.
4. Ask only one question per line. If it makes sense, you can place questions side by side using our tool.
5. Group similar questions together or in the same area of the survey.
6. Ask interesting questions in the beginning of the survey to grab the participants’ attention. This helps to stimulate interest.
7. Place demographic and/or sensitive questions at the end of the survey. If they are in the beginning, participants may opt out early.
8. Finally, test the survey before going live. A small sample of test respondents can help verify if your survey is working properly. This enables you to revise and edit questions and the survey design.
c. End of Survey or Thank You Page
Once your respondent has reached the end of your survey, you can create a Thank You page. Here you can thank the respondent for their time. Also, let them know that once they click the “Done” or “Submit” button, their survey response will be submitted. This may help to build rapport with the respondent; possibly increasing the likelihood that they will participate in your future survey invites.
2. Layout for coding and identification
As the designer of the survey, pay attention to the physical layout of the survey to reduce the likelihood of errors on the respondents’ end or on your end regarding areas of coding or editing. Here are some principles to follow to make the survey logical for all people accessing the survey, as well as easy to identify, code, and store:
1. Identification – You can add a unique number or identifier to each questionnaire.
2. Numbering – Questions can be numbered sequentially throughout the survey, even if the survey is divided by pages or sections. You can choose to have our tool number the questions throughout the entire survey as a whole or have the questions numbered according to each individual page. This may help you in coding your survey.
3. Instructions – General instructions are important for the administration of the survey as well as for the collection of accurate data. The following two types of information ought to be distinguishable in the survey: questions to be read / answered and instructions to be followed. You may want to customize your survey to include different fonts for the instructions or page descriptions vs. the survey questions themselves. Place any special instructions on either the page description/section or directly above the question itself.
4. Fonts & Formats – If you want to emphasize important words, then underline or bold them in the survey question or page description. This makes it easier for your respondents to identify key points or items. You will need to incorporate HTML code into the survey design to emphasize certain words or phrases. We are only able to offer limited support in your own HTML coding, so knowledge of basic HTML is necessary.
The pre-test or test pilot of the survey provides two functions. It first serves as the initial “live” test of the survey, and secondly it is the last step in finalizing the survey questions and form. The pre-test is possibly one of the most critical steps in administering a survey. By opting out of conducting a test pilot, you could jeopardize the accuracy of the data. The test pilot offers feedback on whether the survey’s wording and clarity is apparent to all survey respondents and whether the questions mean the same thing to all respondents.
The three basic goals of the pre-test are:
1. To evaluate the competency of the questionnaire.
2. To estimate the length of the survey or time to take the survey.
3. To determine the quality of the surveyor.