If you want to increase revenue from your email campaigns, one simple concept you can begin to test is to "sell the click, not the donation."
For example, take a look at the email examples below. In the control version of the email, they have this wonderful story. They tell the entire story in the email, and so the recipient says, "I've got all the information I need to make my decision. Do I want to click, or do I not?"
It's all in the email, in one single step. If the story is good enough and compelling enough to motivate the recipient to give a gift, then they might click.
We wanted to see if changing how we approach motivation would increase revenue.
In the treatment version, we just shortened that email up. We said, "Click here to listen to the story online." We don't give away the story in the email. Instead, we tease the story and use it to sell the click in the email, not (yet) the donation.
In this case, we had a 257 percent increase in clickthrough.
Here's another example. This is the first in a series of emails for the Heritage Foundation's end-of-year campaign. The control version of the email used our typical call-to-action language: "Donate now. Give a gift now. Click here to give."
We suggested, "The goal of the email is to sell the click, not the donation." Maybe we could get more people to the landing page by selling only the click.
On our landing page we have this amazing video. So we said, “Let’s just tease the video in the email and tell people to click through to watch the video. After watching the video, we can continue the story and transition into an ask where people can give on the same page that inspired the gift.”
We set up an AB split test. Half the file got version A (the control), and half got version B (the treatment). The people who received version B had a 369 percent higher clickthrough rate, which produced a 121 percent increase in revenue from donations.
Did we have a lower conversion rate on version B clickers? Yes, we did. But we drove so much more traffic to the website that the added volume more than made up for the drop in conversion rate.
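To make that math concrete, here's a minimal sketch in Python of how a higher clickthrough rate can outweigh a lower on-page conversion rate. All of the list sizes, rates, and gift amounts below are hypothetical, chosen only so the revenue lift lands near the 121 percent reported above; they are not the actual Heritage Foundation numbers.

```python
# Hypothetical illustration: revenue = recipients x CTR x conversion rate x avg gift.
LIST_SIZE = 50_000   # recipients per version (hypothetical)
AVG_GIFT = 50.00     # average donation in dollars (hypothetical)

def revenue(ctr, conversion_rate):
    """Total revenue produced by one email version."""
    clicks = LIST_SIZE * ctr
    donations = clicks * conversion_rate
    return donations * AVG_GIFT

# Version A: long email sells the donation directly.
rev_a = revenue(ctr=0.010, conversion_rate=0.20)    # 500 clicks, 100 gifts

# Version B: short email sells the click; 369% more clicks, lower conversion.
rev_b = revenue(ctr=0.0469, conversion_rate=0.094)  # ~2,345 clicks, ~220 gifts

print(f"Version A revenue: ${rev_a:,.2f}")   # $5,000.00
print(f"Version B revenue: ${rev_b:,.2f}")   # ~$11,021.50
print(f"Revenue lift: {(rev_b - rev_a) / rev_a * 100:.0f}%")  # ~120%
```

Even though version B converts clickers at less than half the rate, the nearly fivefold traffic increase produces roughly double the revenue.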
Let’s look at one other example of “selling the click.”
We were working with the Colson Center. They would send this very, very, very long email. I'm not going to tell you that long emails are bad, but we simply said, "Let's just experiment." We wanted to sell the click with a shorter email, continue the story on the landing page, and see if more traffic to the website would result in more donations, like we saw in the previous examples.
We set up an AB split test. Version A got the long-form email that gave them all the information right in the email.
Version B got just a tease. When recipients clicked through, the landing page continued the story with the same content as the long-form email, but it transitioned right into the ask, where visitors could donate.
We ran the experiment, and version B produced a 1,209 percent increase in clickthrough. However, when we analyzed the data, we also noticed a 3.57 percent drop in donations. We said, "Uh-oh."
When we looked at the donations, we realized that the decrease had not reached statistical validity: not enough people converted on either version, and the difference wasn't big enough for the sample size to be conclusive. We needed to keep testing.
This is one lesson that we're going to come back to: the importance of validating your test results to a 95 percent minimum level of confidence. That means there is no more than a 5 percent chance that the difference you observed is due to random variation. You want to have that level of confidence before you roll out a treatment more widely.
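If you're curious what that check involves, here's a minimal sketch of a two-proportion z-test in Python. This is a generic statistical method, not the specific tool on our website, and the donation counts below are hypothetical, not the Colson Center's actual data.

```python
# Two-proportion z-test: is the difference in conversion rates between
# version A and version B statistically valid at 95% confidence?
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference between two rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))         # two-sided p-value
    return z, p_value

# Hypothetical counts: 90 donations from 4,000 recipients vs. 82 from 4,000.
z, p = two_proportion_z_test(conv_a=90, n_a=4000, conv_b=82, n_b=4000)
print(f"z = {z:.2f}, p = {p:.3f}")
if p < 0.05:
    print("Valid at 95% confidence: safe to roll out the winner.")
else:
    print("Not statistically valid: keep testing before rolling out.")
```

With numbers like these, the test comes back non-significant, which is exactly the situation we were in: a visible drop in donations that the sample size could not yet confirm.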
Because we did not validate the first test, we ran a very similar test the next month (control and treatment images, above). That month, the shorter-form email produced a 117 percent increase in clickthrough, but this time donations dropped by 30.7 percent.
3 Critical Lessons on Clickthrough Rate from This Experiment
We never like to see a red arrow (indicating a decrease). However, we learned some tremendously valuable lessons through this experiment. Here they are:
- The first lesson is this: always, always, always validate your data. If you don't know how to do that, we have a tool on our website that you can use to validate your test results. When you're validating, make sure you're validating for the ultimate conversion goal. If we had been fixated on traffic in this experiment and said, "Oh gosh, we got a higher clickthrough rate, so this is the winner," we would have been in trouble. If we hadn't looked at the donations coming through on the other side, we would have been shooting ourselves in the foot. You absolutely must validate your data through the entire FCORM equation. Make sure you're validating for the ultimate goal: more revenue.
- For some organizations, the messenger trumps the message. Let me go back to the two emails in this experiment. Reviewing the results, we wondered how the long-form email, even though it sent far less traffic, could have such a high conversion rate that it trumped all the additional traffic from the short-form treatment. If you look at the creative, it's designed to look like a letter, and the letter is signed by Chuck Colson, founder of the Colson Center. What we deduced is that when Chuck Colson tells people to do something, they are much more motivated to do it. Even though we don't get as much volume from the long-form email (because a lot of people don't read all the way to the end), the people who do click arrive at the website more motivated, because Chuck told them to give, and so we see a much, much higher conversion rate. The impact of the sender's authority is very interesting, and it's something you can test with your own organization as well.
- The learnings produced by testing are often more valuable than the lift. We get excited about seeing big green arrows that go up. However, the first goal of every experiment is to produce a learning. This requires that you craft a very specific and clear hypothesis before running your experiments. Afterward, we can revisit the hypothesis, ask, "What did we learn?" and write out the results against it. If the hypothesis was wrong, that also teaches us something we can apply to future campaigns. Not only did we apply the lesson from this campaign across the board at the Colson Center, but we've been able to take the same lesson and apply it to many other campaigns and even other organizations. One experiment produced an absolutely exponential effect. Remember, the learnings are more valuable than the lifts.
This idea of selling the click versus selling the donation, and understanding on the other end whether you're getting more donations out of your funnel or not, is an interesting test you can run in your own campaigns.