Thoughts on gathering feedback

As a follow-up to an earlier piece on feedback, I reached out to Bridget — a friend and HR executive — to get her professional take on how best to approach gathering feedback.

These were my takeaways:

  • It would be helpful to have an actual conversation…maybe not with each and every one, but a few.
  • Send an introductory email along with your survey, something like the message I sent ahead of this conversation. (e.g. “Between client work and not having an annual review, I realize I really need to build in some feedback loops for myself. And I’m looking for your input.”)
  • Anonymity is important.
  • People will gravitate towards the positive if they know that you know it’s them. That’s been my experience.
  • If it is a service or a product you could set up follow-up conversations, but where personalities are concerned… (when asked if I could give the respondent the opportunity to opt into a post-survey live conversation)
  • You probably want to get feedback on behavioral stuff, which can be hard to swallow
  • I always ask “Would you recommend this person to a friend or colleague” in the context of, say, fast-forward 10-15 years, someone contacts you and says, “Hi, I’m going to be working with Donald, what do you think of him?”
  • What am I doing well?
  • What should I be focusing on improving?
  • What do you wish I could do better?
  • Start with the positives. If they can say the positives, they are more likely to be able to say the negatives afterwards.
  • You’re doing this on your own, for your own benefit, you’re actively soliciting feedback, so people are going to be honest with you.
  • We ask the same question a couple different ways.

She is going to dig around for some 360-review-type questions to help show me a potential framework for the survey, for which I am grateful. This conversation alone gave me a much better sense of the things that matter most as I build the survey.

One lingering question was really cleared up for me: who am I sending this to? The most sensible thing seems to be separating the behavioral questions from the project-based questions. I might give the behavioral questions to a select handful of people, while the broader project-based work questions would go to everyone I interact with, across departments.

I will also plan to sit down with my supervisor afterwards, once I have all the survey input, and have a chance to review and reflect on what I have read. He and I can discuss his feedback and I can leverage his perspective when it comes to better understanding the overall feedback I receive.

On another note, I have really enjoyed using Typeform to publish my surveys. It has a conversational flow to how the questions are served up. But SurveyMonkey recently gave its service a major facelift, and for this and other upcoming surveys I am going to give it a try.

Next step is to build the survey. I am going to send it to Bridget, too, to get her opinions before I launch it to a few colleagues.

Asking for feedback

Over the past two years, I have had clients, professional relationships, and ongoing projects, and as I've sought out articles on feedback, it came to my attention that I have never properly solicited feedback in a meaningful way.

I have received feedback, for sure, though often too late. In one instance I let a project get away from me and the client was rightfully…irked. They were incredibly professional and gracious, but nevertheless, I knew I had done them a disservice. Perhaps a more regular feedback loop would have increased my chances of being able to course-correct.

I have only ever gone through a 360-feedback review once, in a college leadership program; otherwise, annual reviews have been the only formal review process I have done. Now that I own my own business, what mechanisms do I put in place to ensure I am listening as well as I should be and incorporating feedback into my practice?

The annual review process is insufficient: it happens too infrequently to have a meaningful impact, and it encourages folks to hold their thoughts until that one time a year. Seth Godin talks about a soft-skills inventory and questions to ask of yourself:

If you choose to, though, you can do your own review. Weekly or monthly, you can sit down with yourself (or, more powerfully, with a small circle of peers) and review how you’re shifting your posture to make more of an impact.

This led me to think: should I solicit feedback and conduct my own review, of sorts?

  • Which questions should I ask?
  • Who should I ask? (e.g. peers, supervisors, clients)
  • Does it matter whether or not I ask them to attach their names to the feedback?
  • What is it that I want to know? That is to say, I could derive the questions if only I knew what information I am looking for.
  • In which case, what metrics matter for my long-term success?
  • Is written feedback sufficient or do I need to incorporate some live in-person component to this?
  • Does feedback need to be anonymous to allow for reviewers to be honest or does that cheapen the feedback for them and for me?
  • How much is the feedback about me versus the work (outcomes) or the process (why and how)?

One HBR article from 2011 cites an SKS process: stop, keep, start:

  • What should I stop doing?
  • What should I keep doing?
  • What should I start doing?

Effective feedback requires specificity. Are these questions so wide open that they would fail to solicit both the sentiments and the specifics to back them up?

This article emphasizes the

Merits of a subscription model

Recently, my sister-in-law picked up an iPad so that she would be able to do her work when not in front of her iMac. A week later, my brother called me bent out of shape because he could not edit a Word document from it. After I explained that Microsoft Office requires an Office 365 subscription, he was bullshit. “What’s the point of having an iPad?” he exclaimed. I offered him another option: use Pages. He wasn’t satisfied with that either. I was unsure of the real issue — does he think software should come at no cost? — but it reminded me of the rise of the subscription model and how little people understand it.

There is a disconnect when it comes to cost. Price, I should say; price is actually the right term here. People are sensitive to how much they pay for something, and in the case of software, most still hold out hope of owning it outright. The disconnect comes from a misunderstanding of how the current business model works, and for some, from not valuing the costs developers face, especially in their time. (That goes for any small business owner or professional. Oddly enough, I find a faction of people who are simultaneously unsatisfied with low salaries and wages and obsessed with low prices. How can both of these things be reconciled?)

Back in the day, when you bought a software program, it came on a physical medium — remember those prompts to “insert disk 17”? You paid once, owned the program, and basically lived with it until the next version arrived with a feature set you actually cared about.

Expectations are different now. We expect software with free and continuous upgrades. Gone are the days when “updates” and “upgrades” were more in sync, aside from perhaps a catastrophic software patch. Buy a program — an app — and ne’er a month goes by without an update notification awaiting. In the last few years, we have witnessed the rise of the subscription model. I think the moment I knew we had passed the point of no return was Microsoft’s announcement of Office 365.

I am a huge proponent of Ulysses — I am writing this in Ulysses on a 12.9” iPad Pro, as a matter of fact — and earlier this year they sprang a subscription on their users, just as many others have done. You too, Ulysses? Is this the end of buying and owning software? How does this development cycle work? What are the consequences to developers and to users?

Accompanying their announcement, The Soulmen — the developers of Ulysses — published a post to their blog detailing the realities of their business, which I found both compelling and telling of the industry as a whole. The post was well articulated and really got to the crux of the problem. I was so pleased with their effort to bring me along on their journey, rather than just forcing “the inevitable” upon me, that I decided to sign up right then and there. (Not to mention that I wanted to continue using their product.)

Interestingly enough, the way we pay for software hasn’t caught up to that rather drastic change in development yet. We still pay for the product at the time of its release, meaning we’re still paying for its past development cost. However, we now expect the product to magically evolve over time, via downloadable updates, without a need to constantly pay for new versions.

For some reason, this model has gained a popular label which can only be seen as a major fallacy: Paid upfront. No, it isn’t. It never was. We still only pay for the version at time of release; apps don’t spring into existence, after all. If anything, this model is “pay once”.

This definitely puts it all into context for me, and in a bigger way, it resonates with me too. As a small business owner, I know — or feel as though — people are price sensitive. Often the first thing people ask about is price and not value, which is an interesting behavior I hope to explore more.

So why bother at all then? Well, we need a good way forward before we run into trouble. We want to make sure the app will be around for years and years to come. We want to heavily invest in its development, and this requires the right setting for our team, our families and our users. Writers want to rely on a professional tool that is constantly evolving, and we want to keep delivering just that.

A Custom Excel Function To Calculate RF(M) Scores For Fundraisers

To accompany a guest post with Gravyty, I want to share an Excel function we wrote to take your entire constituency and generate a Recency-Frequency score for each individual.

This function is like other built-in functions you may have used in Excel, except that we’ve programmed it to do exactly what we want: determine the Recency-Frequency score for each individual donor record.

Recency-Frequency Score = Recency Score + Frequency Score

  • Recency-score function requires the last gift date
  • Frequency-score function requires a person’s year of graduation and how many years they have contributed

Then, just add these two numbers together.

It might sound complicated, but once you get through it, it’s fairly simple. Once you get the hang of it, you’ll be able to run these scores any time you wish. For more information, check out these links from Microsoft’s Office support pages:

  • Create formulas in Excel
  • Making calculations using functions in Excel
  • Create custom functions in Excel

I am happy to help where I can. Whether you need other custom functions or a better understanding of how these may benefit your organization, let me know.

Excel Function: recencyScore

Public Function recencyScore(r As Range) As Integer

    Dim lastdonate As Date
    Dim donatefiscal As Integer
    Dim currentfiscal As Integer
    Dim d As Integer

    'Has never made a gift: check before converting, so a blank
    'or non-date cell does not raise a type-mismatch error
    If Not IsDate(r.Value) Then
        recencyScore = 0
        Exit Function
    End If

    lastdonate = CDate(r.Value)

    'We assume fiscal years run from July to June. If that changes,
    'these next lines will need to be modified
    donatefiscal = IIf(Month(lastdonate) <= 6, Year(lastdonate) - 1, Year(lastdonate))
    currentfiscal = IIf(Month(Now()) <= 6, Year(Now()) - 1, Year(Now()))
    d = currentfiscal - donatefiscal

    'Gave this fiscal year
    If d = 0 Then
        recencyScore = 20
    'Gave last year
    ElseIf d = 1 Then
        recencyScore = 20
    'Gave 2 years ago
    ElseIf d = 2 Then
        recencyScore = 15
    'Gave 3 years ago
    ElseIf d = 3 Then
        recencyScore = 10
    'Gave 4 years ago
    ElseIf d = 4 Then
        recencyScore = 5
    'Gave 5 years ago
    ElseIf d = 5 Then
        recencyScore = 2
    'Gave more than 5 years ago
    ElseIf d > 5 Then
        recencyScore = 1
    'Error score (e.g. a gift dated in a future fiscal year)
    Else
        recencyScore = -1
    End If

End Function
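If you would rather prototype or sanity-check the scoring outside of Excel, the same recency logic can be sketched in Python. This is my own rough port, not part of the workbook; it assumes the same July-to-June fiscal year and the same tiers as the VBA function:

```python
from datetime import date
from typing import Optional

def recency_score(last_gift: Optional[date], today: Optional[date] = None) -> int:
    """Score a donor by how recently they last gave (July-June fiscal years)."""
    if last_gift is None:
        return 0  # has never made a gift
    if today is None:
        today = date.today()
    # Label each July-June fiscal year by the calendar year it starts in
    gift_fy = last_gift.year - 1 if last_gift.month <= 6 else last_gift.year
    current_fy = today.year - 1 if today.month <= 6 else today.year
    d = current_fy - gift_fy
    if d < 0:
        return -1  # error: gift dated in a future fiscal year
    # This year and last year score 20; older gifts taper off
    tiers = {0: 20, 1: 20, 2: 15, 3: 10, 4: 5, 5: 2}
    return tiers.get(d, 1)  # anything older than 5 years scores 1
```

Passing `today` explicitly keeps the function deterministic, which makes it easy to test against known gift dates.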

Excel Function: freqScore

Public Function freqScore(giftCount As Integer, classYear As Integer) As Integer

    Dim freqP As Double

    'Check to see if they have ever made a gift
    'If no gifts, then it's zero
    If giftCount = 0 Then
        freqScore = 0
        Exit Function
    End If

    'Error score: guard against dividing by zero (or a negative span)
    'for class years at or after the comparison year
    If classYear >= 2014 Then
        freqScore = -1
        Exit Function
    End If

    'Calculate the frequency ratio:
    '# years a gift has been made divided by the total # years possible
    '(2014 is the current fiscal year; update it as years pass)
    freqP = giftCount / (2014 - classYear)

    'Gave every possible year
    If freqP >= 1 Then
        freqScore = 30
    ElseIf freqP >= 0.9 Then
        freqScore = 24
    ElseIf freqP >= 0.8 Then
        freqScore = 18
    ElseIf freqP >= 0.7 Then
        freqScore = 12
    ElseIf freqP >= 0.6 Then
        freqScore = 6
    'Gave in fewer than 60% of possible years
    Else
        freqScore = 3
    End If

End Function
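The frequency score ports just as directly. Again, this is my own sketch rather than part of the workbook; the 2014 comparison year hard-coded in the VBA above is exposed here as a parameter so the example stays testable:

```python
def freq_score(gift_count: int, class_year: int, current_year: int = 2014) -> int:
    """Score how consistently a donor has given since graduating."""
    if gift_count == 0:
        return 0  # has never made a gift
    years_possible = current_year - class_year
    if years_possible <= 0:
        return -1  # error: class year at or after the comparison year
    # Frequency ratio: years a gift was made / total years possible
    ratio = gift_count / years_possible
    if ratio >= 1:
        return 30
    for floor, score in ((0.9, 24), (0.8, 18), (0.7, 12), (0.6, 6)):
        if ratio >= floor:
            return score
    return 3  # gave in fewer than 60% of possible years
```

Adding this result to the recency score gives the combined Recency-Frequency score described above.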



What You Measure Is What You Get

If you are looking to improve year-over-year, it’s a good idea to know what that improvement looks like.

Typically, we go straight for total dollars raised or aggregate participation rate. But in most situations we aren’t expecting huge gains; instead, we are looking for many smaller wins to yield the 2-3% increase that keeps us ahead of last year’s trend line.

Donor acquisition is costly. Looking at increased participation to drive your increase in dollars misses a larger opportunity.

Focusing on recapturing donors is nothing new, but what metrics are you looking at to track this?

Lastly, fundraising efforts rarely rely on a single contributor to be successful. Fundraising is a team sport. But in meetings I’ve been privy to, a disproportionate amount of time is spent talking about aggregate metrics as opposed to incremental, program-based metrics where individual contributors can — and feel as though they — make a difference.

A Tiny “Primer” On Basic Stats

You might know your average gift size from last year, but are you actively tracking it this year? More importantly, why are you asking for the average at all? Averages can quickly become irrelevant when you look at your giving data. What does it mean to know that your average gift is $587? (Not much, in my estimation.)

Conversely, by looking at the median — the 50th percentile — you get a better gauge of your performance. A median gift of $100 tells a more meaningful story: it indicates that half of your donors are contributing at or below $100, and you can bet you have many people giving exactly that amount. (But don’t take my word for it — sort and scroll through your raw data or a frequency table.)

And if by “average” you are really wondering what the most common gift is, look no further than the mode.

Another important distinction between the mean and the median: the mean is often not an actual gift amount, while the median almost always is. The mean simply shows the central tendency of your giving data.
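The difference is easy to demonstrate with Python’s statistics module and a made-up gift file (the numbers below are illustrative, not from any real fund):

```python
from statistics import mean, median, mode

# Many modest gifts plus one large outlier
gifts = [25, 50, 50, 100, 100, 100, 250, 500, 5000]

print(round(mean(gifts), 2))  # pulled far upward by the $5,000 gift
print(median(gifts))          # the 50th percentile: an amount people actually gave
print(mode(gifts))            # the most common gift
```

Dropping the single $5,000 gift barely moves the median, but it cuts the mean by more than half — which is exactly why the median is the sturdier number to track.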

Data For The Sake of Data

The mistake most people make when it comes to data is misunderstanding what it promises. Data is not an end in itself; rather, it is instrumental in answering strategic questions.

Grasping the different applications of these basic stats will enable you to better articulate your questions and take advantage of your data to answer those questions. You’ll be closer to making data-driven decisions.

Armed with this knowledge, you can ask yourself:

  • What are my objectives this year? What metrics do I need to track?
  • Are my current tactics focused on the right areas?
  • Am I measuring our results in a meaningful way?
  • Are my efforts having the effect I intended?
  • What results do I need in order to declare an effort successful?

What happens to your fund if you raise your median gift size from $100 to $125?

What would it take to do this?

Hint: Stronger data-driven segmentation, dynamic gift ask amounts, stewardship tied to the right levels.


This post originally appeared on