The five-person web design team for a large corporation nervously shifted in their leather chairs around the way-too-large mahogany table, while waiting for their newly appointed executive to arrive. They had reason to be nervous. Their new executive, three levels up from them, had requested the meeting because she wanted to review the new design system they had been working on for several months, in preparation for the upcoming website redesign for the company.
The new executive walked in, greeted everyone, and sat at the end of the long table. The manager of the design team had prepared for days for this moment, but nothing could have prepared him for what would follow.
The manager launched into his meticulously prepared presentation (designers always have great-looking slides), opening with the mock-up of the website’s redesigned home page, which highlighted the new, more muted gray color scheme that was the centerpiece of the new design concept.
“How did you choose these colors?” the executive interjected.
The veteran design manager was accustomed to executives questioning color choices. He believed that all execs fancy themselves as having good taste, and color is the one area of design that they can discuss without risking looking foolish. He confidently shifted to one of his backup slides, prepared for just this question, showing how gray tones were trendy in the design community and that two other tony corporations with the same type of clientele had recently shifted to similar color schemes.
“But what else did you test?” the executive pressed on. “How can you be sure that this change will improve our sales? I need to be confident of that before we can make this change to our entire global website.”
To describe the resulting look on the face of the design manager as resembling “a deer in the headlights” is unfair to all those deer who appear far more confident when facing down a truck.
As the conversation continued, it was clear that not only was there no plan to limit the downside risks of this radical visual change to the entire website, but no one even knew what the compelling reason was to make any change at all. There was no evidence that the current website had any visual problem, except that this company always conducted a redesign every three years, so they felt that the website looked “dated.”
Soon, the massive redesign project was canceled, replaced by a long series of tests in small-market countries of several new designs, along with a wave of other user experience enhancements based on specific data showing where customers were having difficulty. Two years later, the company redesigned its global website on the basis of the results of these tests, retaining virtually the original color scheme it started with, and delivered a marked increase in sales.
This executive transformed the way her teams worked, moving them to a new process for improving user experience basics, one that actually puts the user at the center of the process.
We’ll unpack this process step by step:
- Finding the problems
- Fixing the problems
- Propagating the fixes
We can use data to find problems, experiment with fixes on a small scale until something is working, and then reuse those ideas to solve similar problems across a site. But before we launch into the new process, we want to review the first principles of user experience (UX).
User Experience Basics
User experience is the original outside-in marketing practice because the best UX design comes from the user’s perspective. This means learning users’ needs and being laser-focused on serving them in the simplest, most elegant way. UX combines everything that goes into the interaction with your online customer. (Your company might include offline customer experience, too, but that’s not our area of expertise.) We’re admittedly lumping arguably separate practices such as information architecture and web design into user experience because we believe that they should all be done using the same orchestrated process.

UX is a great way to end this book because it is a field increasingly awash in data. In Figure 6-1, we illustrate a typical user scenario, showing how it can end well (or badly) for the company. You can imagine how much data can be collected along the way to reveal both ultimate success and failure as well as the seeds of those outcomes.
This simple diagram describes the central use case of outside-in content marketing. Emily starts with some type of “drive-to-web” stimulus, such as clicking on a search result. When Emily lands on your page, she spends (on average) six seconds deciding whether your content is relevant to the stimulus that brought her there. In the case of search, she scans the page for visual cues that make the page relevant, such as finding her keywords in the headings, body copy, and pull quotes. If your experience doesn’t clearly and quickly demonstrate relevance, she will go somewhere else, or she will “pogo stick” back to the search results. Emily’s first scan is as automatic as breathing—she’s not consciously thinking. (That’s why one of our favorite books on designing good user experiences is Don’t Make Me Think by Steve Krug.)
Once she determines that the page is relevant, she will start reading, scrolling, and clicking. When she clicks the call to action for the ultimate content strategy asset you provide (video, podcast, case study), it needs to deliver on the promise of the page. The whole purpose of the page is to convince Emily that it has what she is looking for. She is looking for an asset that will answer the question implicit in her query. The more assets you deliver, the less likely she is to find the right one. So keep it simple, with no more than three assets to a page.
As simple as this diagram might appear, it is a radical concept for many digital marketing organizations. Why? Because they are often more focused on their own content marketing goals than on user goals. Their designs are often unsuccessful because they’re not focused on meeting user goals but on the goals of their organizations. For example, they’re focused on getting registrations for a particular asset when that is not the asset users are looking for.
In our experience, designs that are focused on user goals are easy to spot. They all have three key features:
Simple
A page should have a single purpose. If it does, it should be easy to serve that purpose in the simplest way. Teams that struggle to create simple designs typically start with pages that are trying to do too much. Inventions like carousels and other dynamic designs signal that you should create multiple experiences rather than trying to serve multiple purposes with one experience. So keep it simple. Use keyword data to deliver the right asset, which will gain trust and lead to deeper engagement with your content and your offerings.

Clean

Pages should not be cluttered with irrelevant graphics or stock photos. Graphics should serve a purpose—not merely to break up gray text but to help visitors determine at a glance that the page is relevant and to take relevant action. You should, however, have a featured image that makes the page more attractive when shared in social media.

Textual

Many designers we have worked with have said that “visitors don’t read on the web, so they’re not looking for text.” First of all, users do scan the text for words that are relevant to the search keywords that led them to the page. Second, when users determine that the page is relevant, they do read text—as long as it helps them answer their questions or solve their problems.

Some studies (e.g., Jakob Nielsen’s seminal study, “Why Web Users Scan Instead of Reading”) that were conducted to prove that users don’t read on the web had two crucial flaws: They used designs that made reading difficult and text that gave users little incentive to read. Still, these studies are often cited (many years later) in web design training and literature. Hence, we need to let go of the idea that users do not read web pages.
The primary reason for text on a page is to help users understand the context. They are coming to your unfamiliar design from the familiarity of search results pages. They are often dropping into the middle of your experience without having experienced the carefully crafted introductory pages. They need to know that the page is worth their time. Text is what many people use to determine that. Even in the case of a page that contains non-text assets, the first decision users face when they land on your page is whether it is worth their time to consume the asset (e.g., play a video, listen to a podcast). At minimum, the text needs to persuade them to do so.
With that background, we are now ready to tackle the three parts of the UX improvement process, starting with how we actually find the problems in the UX.
Finding the Problems
The easiest way to identify experiences that need to be upgraded is to use data. You must use several data points to identify problems with your UX because Emily can get frustrated at any point in her journey. Identifying her frustrations and eliminating them is the point of UX upgrades.

The major indicators of UX bottlenecks are content with:
- High bounce rates
- Low engagement rates
- Low conversion rates
- Low advocacy rates
High Bounce Rates
Recall that a bounce is registered each time someone comes to your page but then immediately abandons your site. Good bounce rates are in the 30% range. Bad bounce rates are in the 80% range. If you put all your top-ranking pages in a spreadsheet and sort them by bounce rate, the result can serve as a backlog of pages to upgrade.
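To make that concrete, here is a minimal TypeScript sketch of the sorting step, assuming you have already exported per-page metrics from your analytics tool; the PageStats field names and the thresholds are our assumptions, not any specific tool’s schema.

```typescript
// Minimal sketch: build a bounce-rate backlog from exported analytics rows.
// The PageStats shape and the thresholds are assumptions, not a tool's schema.
interface PageStats {
  url: string;
  pageviews: number;
  bounceRate: number; // 0..1
}

function buildBounceBacklog(rows: PageStats[], minViews = 1000): PageStats[] {
  return rows
    .filter((r) => r.pageviews >= minViews)       // focus on high-traffic pages first
    .filter((r) => r.bounceRate > 0.5)            // pages bouncing over 50% need work
    .sort((a, b) => b.bounceRate - a.bounceRate); // worst offenders to the top
}

// Example with made-up data:
const backlog = buildBounceBacklog([
  { url: "/pricing", pageviews: 12000, bounceRate: 0.82 },
  { url: "/docs/start", pageviews: 8000, bounceRate: 0.31 },
  { url: "/blog/redesign", pageviews: 5000, bounceRate: 0.67 },
]);
console.log(backlog.map((p) => `${p.url}: ${Math.round(p.bounceRate * 100)}%`));
```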
Content marketing can include many ways for visitors to discover your content, but we’ll concentrate on search because it is the most important. If your page has good search visibility and you get decent click numbers, but visitors bounce at a high rate, it’s a good indication that you are not doing enough to demonstrate that the page is relevant to Emily’s search keyword. If your bounce rates are over 50%, the page needs work. When a page has high bounce rates, you’re forcing users like Emily to work too hard to understand the relevance of the page to the search query they entered into Google.
Users scan pages listed in Google’s search results before deciding which pages are worth their time and attention. Users do this automatically, like breathing. They don’t commit to reading, scrolling, or clicking until this automatic process takes place. So the goal of search-friendly UX is to demonstrate relevance without making users think. And this needs to be done for all kinds of devices—tablets, PCs, and especially phones.
The simple fix is to make sure that the main heading and the first part of the body copy are the first things users see, and they both need to have the keyword in a place where the scanning eye can recognize it. It goes without saying that this needs to be above the “fold”—meaning the first pane of content that a user sees, regardless of the device.
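As a rough way to audit a page for this, here is a hedged sketch you could run in a browser console; the selectors ("h1", "main p") are assumptions about your markup, and a real audit would also verify that both elements render within the first viewport.

```typescript
// Sketch: does the target keyword appear in the main heading and the opening copy?
// The selectors ("h1", "main p") are assumptions about the page's markup.
function keywordInHeadingAndOpening(keyword: string): boolean {
  const kw = keyword.toLowerCase();
  const heading = document.querySelector("h1")?.textContent ?? "";
  const opening = document.querySelector("main p")?.textContent ?? "";
  return heading.toLowerCase().includes(kw) && opening.toLowerCase().includes(kw);
}

console.log(keywordInHeadingAndOpening("outside-in marketing"));
```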
Low Engagement Rates
Engagement is perhaps the fuzziest kind of measurement our clients use, not because you can’t define it precisely and measure it scientifically, but because people tend to define it in many different ways. If different people in the same marketing organization define it differently, they could be failing to communicate about the relative effectiveness of their content marketing. So the most important thing is to standardize on a definition that works for your organization and stick with it.
All definitions of engagement start with the absence of bounce. If someone bounces, they are, by definition, not engaging with the page. But if they don’t bounce, what do they do? Typically, they read, scroll, and click, in that order. The best engagement metrics include these three elements.
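There is no standard formula for combining them; as one illustration of “standardize on a definition and stick with it,” here is a sketch of a composite engagement score built from those three elements. The weights and the dwell-time read proxy are assumptions to be tuned, not an industry standard.

```typescript
// Sketch of one possible composite engagement score per visit.
// The weights and the dwell-time read proxy are assumptions to standardize on.
interface VisitSignals {
  bounced: boolean;
  dwellSeconds: number;   // noisy proxy for reading
  maxScrollDepth: number; // 0..1 fraction of the page scrolled
  ctaClicks: number;      // clicks on in-page calls to action
}

function engagementScore(v: VisitSignals): number {
  if (v.bounced) return 0; // engagement starts with the absence of bounce
  const read = Math.min(v.dwellSeconds / 60, 1); // cap the read proxy at one minute
  const scroll = Math.min(Math.max(v.maxScrollDepth, 0), 1);
  const click = v.ctaClicks > 0 ? 1 : 0;
  return 0.3 * read + 0.4 * scroll + 0.3 * click; // 0..1
}
```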
The main problem with including these three elements is that they are not all easy to measure. Reading, in particular, is almost impossible to measure. A proxy for reading that is often used is the time on page metric. But time on page is a really noisy number for today’s browsers and devices. Users can start to read something, get a phone call, check their email, and then return to the article. There is no way of knowing that they were actually reading the whole time they were on the page.
Scrolling is easier to measure. You can use heat mapping software, which measures how users move their mice, where they click, and the degree to which they scroll. Heat mapping software can even help you measure reading if you have body copy that users have to scroll to complete. We do not recommend forcing users to scroll in order to read at least some of the body copy, but if they can get the essentials without scrolling and then they scroll for more detailed information, it is a good indicator that they are reading the content. On mobile devices, designers are forced to give users a taste of what’s to come, so scrolling is an even better indicator of reading.
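To give a feel for what such tooling records, here is a minimal browser sketch that reports the deepest scroll position when the visitor leaves the page; the /analytics/scroll-depth endpoint is a placeholder, and real heat mapping products capture far richer signals.

```typescript
// Minimal scroll-depth tracker. The reporting endpoint is a placeholder;
// real heat mapping tools record far richer data (mouse movement, clicks, etc.).
let maxDepth = 0;

function currentDepth(): number {
  const scrollable = document.documentElement.scrollHeight - window.innerHeight;
  return scrollable > 0 ? window.scrollY / scrollable : 1;
}

window.addEventListener(
  "scroll",
  () => {
    maxDepth = Math.max(maxDepth, currentDepth());
  },
  { passive: true }
);

// Report the deepest scroll once, as the user leaves the page.
window.addEventListener("pagehide", () => {
  navigator.sendBeacon(
    "/analytics/scroll-depth", // hypothetical collection endpoint
    JSON.stringify({ url: location.pathname, maxDepth: maxDepth.toFixed(2) })
  );
});
```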
If you see low scrolling or clicking rates among those who do not bounce, it is an indicator that the content is not compelling. It might be relevant, but perhaps it is redundant or otherwise lacking in presentation. One common mistake writers make is to write for the web as though they are writing for print. Scanning is common to both print and the web, but print readers tend to read more uninterrupted text. On the web, readers tend to prefer shorter sentences, shorter paragraphs, more bullets, and smaller chunks of information, called “snackable” text.
If your scrolling and clicking rates are low, try removing needless words, breaking down longer sentences into shorter ones, creating bulleted lists, and so on. Your heat mapping software will tell you where you need to try these things.
The other typical root cause of poor engagement is a lack of compelling images or calls to action. Using infographics is a particularly good way to increase engagement rates because they combine snackable text with explanatory images in much more compelling ways than the typical stock photos. Videos are another form of particularly compelling calls to action. They don’t force a long commitment but rather break up the information task into multimedia experiences.
The crux of engagement is giving users what they need to complete their information task. If you do it in an accessible, clean, and clear way, they will stay on your page and scroll for more. If you overwhelm them with too much information that does not help with the particular information task implicit in their keyword, you will lose them before you are able to convert them. Each abandonment might create a potential adversary, who might amplify the negative experience in her social circles. The stakes are high to create experiences your target audiences want to engage with.
Low Conversion Rates
Each user experience needs an asset that enables the user to take a deeper dive into the content. Perhaps it’s a client case study or a demo video or even a white paper. When users click these experiences, marketers like to call it a conversion. Conversions are so important to marketers that they will do everything they can to capture the users who convert on these experiences, even to the point of forcing users to fill out long surveys that require entry of personal information.
Whether or not it is a good experience to gate content in this way (Hint: It’s often not.), enticing the user to opt into experiences and capturing some of their user information is a definitional aspect of conversion. It demonstrates a level of commitment that implies that the user is ready to be contacted. It is rightly a critical goal of content marketing.
But marketers who do not give users an equal-value exchange for the time and effort it takes to fill out a form are thwarting their own efforts. The central problem is, as a user, how do I know that the content I am forced to register for will give me equal value for my time and attention? That uncertainty causes a lot of abandonment. When you see a lot of abandonment, your first response should be to take the registration form out of the equation.
How then do you measure conversion if you don’t have a registration form in front of your content? More importantly, how can you get the business results you need if you don’t get names and contact information from your users? There is a very simple answer: Launch a registration form at the end of the asset. If they think the content is worth giving their personal information after they have consumed it, this is a vote of confidence for the content. Also, it is a better test of not only the content itself but the placement of the asset in the buyer’s journey. If they have consumed the content and they still don’t want to give you their personal information, they’re not ready to become your client. They need more information to get to the point of conversion. In that case, it’s better not to waste the time of your sales force in contacting them.
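As a sketch of what a registration form at the end of the asset can look like in practice, the snippet below keeps the form hidden until a video finishes playing; the element IDs ("demo-video", "registration-form") are hypothetical.

```typescript
// Sketch: keep the registration form hidden until the asset has been consumed.
// The element IDs ("demo-video", "registration-form") are hypothetical.
const video = document.querySelector<HTMLVideoElement>("#demo-video");
const form = document.querySelector<HTMLFormElement>("#registration-form");

if (video && form) {
  form.hidden = true; // no gate in front of the content
  video.addEventListener("ended", () => {
    form.hidden = false; // ask for personal information only after delivering value
  });
}
```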
This is what testing conversion rates is all about: where and when to present your best content to the audience so that they willingly accept the terms and conditions of a client relationship. If an asset is getting a lot of clicks but the form at the end is typically left blank, take the form away and simply give users a way to dive deeper into other assets, which in turn could have registration forms at the end.
Understand that merely counting conversions might not make sense, especially with multistep buying cycles that take time to unfold. The best thing that content in the first step of your process can do is to get the visitor to the next step, so it makes no sense to test top-of-funnel content against conversions. Treating the transition from step 1 to step 2 as a “microconversion” gives you a better way to test the content from step 1.
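Instrumenting that is just a matter of firing an event when the visitor clicks through from a step-1 page to step 2. A minimal sketch, with a placeholder endpoint and a hypothetical data-next-step attribute marking the links:

```typescript
// Sketch: record a step-1 -> step-2 "microconversion" when the visitor clicks
// through to the next-step asset. The data-next-step attribute and the
// endpoint are placeholders.
document.querySelectorAll<HTMLAnchorElement>("a[data-next-step]").forEach((link) => {
  link.addEventListener("click", () => {
    navigator.sendBeacon(
      "/analytics/microconversion", // hypothetical collection endpoint
      JSON.stringify({ from: location.pathname, to: link.href })
    );
  });
});
```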
If you don’t get conversions, you should at least tag and track the user so that you can remarket to her later. Tagging and tracking technology places a cookie on the user’s device, which is like a digital fingerprint that cues into the user’s behavior without gathering any personally identifiable information. When the same user returns, you can offer her another asset that is the next logical step in her journey, based on the data you have collected about typical client journeys.
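A minimal version of that tagging is a first-party cookie holding a random, anonymous visitor ID. A sketch, with an assumed cookie name and one-year lifetime:

```typescript
// Sketch: tag a visitor with an anonymous first-party cookie so return visits
// can be recognized. The cookie name and one-year lifetime are assumptions;
// only a random ID is stored, no personally identifiable information.
function getOrSetVisitorId(): string {
  const match = document.cookie.match(/(?:^|; )visitor_id=([^;]+)/);
  if (match) return match[1];
  const id = crypto.randomUUID(); // random, not derived from the user
  const oneYear = 60 * 60 * 24 * 365;
  document.cookie = `visitor_id=${id}; max-age=${oneYear}; path=/; SameSite=Lax`;
  return id;
}
```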
Another way to optimize conversion is to test different assets. For example, videos work particularly well in the early steps of client journeys because they can be coded with annotations to drive clicks to the next logical step. If you use videos later in the journey (perhaps as how-to demos), you can offer an annotation at the end that loads a registration form or a purchase experience. There’s no end to the variables you can test in order to improve your conversion experiences.
Low Advocacy Rates
Advocacy is difficult, but not impossible, to measure. We recommend using social sharing as a proxy for advocacy. If Emily shares your content in her social circles, that’s an implicit endorsement of your content and the brand that created it. Measuring social sharing is at least a critical first step in assessing the degree to which your content is generating advocacy.
When measuring the advocacy rate of your content, don’t confuse content-fueled advocacy with general brand advocacy. Some companies use social listening to determine whether clients are advocating their brand; they use brand mentions with positive sentiment as a measure of advocacy. This is a perfectly valid thing to measure, but the problem is that it usually does not tie back to any particular content asset, so it is not a helpful measurement for identifying poor UX. In addition, there’s no way to prove that any changes you made in your content directly affected the brand advocacy metric gathered through social listening.
Another mistake we often see with measuring content advocacy rates is measuring only the sharing done through the Twitter or Facebook sharing buttons you place on your page. While we strongly advocate the use of such buttons, we know that the great majority of social sharing happens through “dark social”: users sharing by copying and pasting URLs into emails and tweets rather than clicking the nice buttons you gave them.
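One common, if imperfect, way to estimate dark social is to classify landing sessions that arrive with no referrer and no campaign tags on deep pages. A sketch under those assumptions:

```typescript
// Sketch: roughly classify how a visit arrived, including a "dark social"
// bucket (no referrer, no campaign tags, landing deep in the site).
// This is a heuristic, not a precise measurement.
type TrafficSource = "search" | "tagged-share" | "dark-social" | "direct" | "other";

function classifyVisit(): TrafficSource {
  const params = new URLSearchParams(location.search);
  if (params.has("utm_source")) return "tagged-share"; // links from your share buttons
  const ref = document.referrer;
  if (/google\.|bing\.|duckduckgo\./.test(ref)) return "search";
  if (ref === "") {
    // No referrer means a typed URL or a pasted link. A deep page with no
    // referrer is more likely a pasted (dark social) link than a typed visit.
    return location.pathname === "/" ? "direct" : "dark-social";
  }
  return "other";
}
```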
Whether or not Emily uses your sharing buttons, you can test all kinds of things about the assets she shares: the placement of the assets on the page, the pages from which the assets are served, the calls to action for the assets themselves, etc. Even if you have the best assets in the world, nobody will share them if they fail to scroll down the page to find them, for example. In these ways, UX is a crucial aspect of social interaction, including sharing.
So, by identifying assets with high bounce rates, or low rates of engagement, conversion, and advocacy, you’ll put together your first draft of the most likely culprits in compromising your UX. Obviously, the pages with the most views combined with these identifying metrics are the ones to target first. Once you know where the problems are, it’s time to fix them.
Fixing the Problems
It might seem daunting to try to fix so many problems, but the beauty of using content marketing analytics to identify the problems is that those same analytics give you clues about what is wrong, and they help you test what you’ve done to prove the improvement.
But how do you start this kind of painstaking process in the first place? In our experience, when you answer that question, you’ve reached your personal inflection point between success and failure in outside-in marketing. Depending on where you work, you might see it go many different ways:
- You take over a small set of pages on your site that you can change over and over again without asking permission.
- You make friends with the analytics manager, who feeds you proof of all the problem areas of the site that you take to management to get funding for fixes.
- You convince an executive to let you do a pilot aligned with better UX practices.
In each case, the next step is the critical one. You keep working on that problem until you begin to improve the results. As we’ll see later, that success allows you to make the case to scale the solution to a larger swath of the website.
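When you do get that small set of pages to experiment on, even a simple deterministic split lets you compare a candidate fix against the original and prove the improvement. A sketch, assuming you have an anonymous visitor ID like the one set earlier; the string hash and the 50/50 split are arbitrary choices for illustration:

```typescript
// Sketch: deterministically split visitors between the control page and the
// candidate fix, so repeat visits see a consistent experience. The string
// hash and the 50/50 split are arbitrary choices for illustration.
function assignVariant(visitorId: string, experiment: string): "control" | "fix" {
  let hash = 0;
  for (const ch of visitorId + experiment) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple rolling hash
  }
  return hash % 2 === 0 ? "control" : "fix";
}

// Example: bucket a tagged visitor into a home-page experiment.
console.log(assignVariant("visitor-123", "homepage-color-test"));
```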