This research wasn't based on random sampling, the gold standard in the survey world, so we want to be fully transparent about our process — how the survey was crafted, how it was fielded, and who responded.
We started by reviewing our learnings from last year's State of the Reader survey — both on a conceptual and a meta level. What did we wish we'd asked differently, or asked at all? We thought carefully about how we could build on and expand what we'd learned, while guarding against confirmation bias.
We then drafted the survey and piloted it with about ten readers. We wanted to be sure the questions made sense, the flow ran smoothly, and it didn't take too long. We made adjustments based on their feedback.
Once we were ready to launch, we reached out to some friends for help promoting — a special thank-you to PangoBooks, Sara (@fictionmatters), Tessa (@thelithomebody), Hunter (@shelfbyshelf), and Brittany (@brittanysbookclub)!
Of course, we also promoted it ourselves. We emailed our Italic Type user base inviting them to participate, and also shared the survey on our own social channels.
We also invited everyone who took the survey to send it to other readers — with the incentive of an additional entry into our Bookshop.org gift card raffle for every referral.
This approach, known as convenience sampling, means our data is representative only of the respondents we heard from — in other words, we can't generalize to the broader population of readers. (That's not to say our findings are definitely unrepresentative — we just can't assume they are.)
We launched in mid-January. In two weeks, we received 803 complete and unique responses — more than three times as many as last year!