As a manager in a previous, more transactional, sales environment, I saw a lot of focus on inputs and conversion rates.
This is something I see in MEDDPICC-driven SaaS orgs. However, a bad implementation focuses too much on lag indicators (pipeline produced and ACV achieved) and uses MEDDPICC qualification as a tool to inspect and beat up reps for gaps in absolute knowledge, rather than to drive coaching, better behaviours etc.
My question is: reviewing and working on inputs (first meetings, technical wins, PoCs won) and then working through coaching to improve outcomes seems like a solid approach. Are you advocating doing something else, or perhaps considering inputs/outcomes differently?
Great question Andy.

Firstly, sorry to hear about your experience of qualification tooling being used to beat up reps. That sucks and has no longevity. Sadly, I hear it a lot.
Knowing what to coach and when is key, and weekly reporting provides the AE a platform upon which to talk through challenges and make asks.
If performance to goal and forecast is behind the curve, I expect the AE to call this out proactively in the weekly update. It can even dominate the report, e.g. all flags can relate to what's working in those lagging indicators and what's not, with an opinion on why and ideas for solutions.
I'd expect them to dive into the lagging indicators I've ensured RevOps have set up for us, and raise where they are struggling. Their perspective is key and I want to listen to it.
The 1:1 then becomes a preliminary coaching session on how to improve, but one of the asks in this situation could be more training, more support etc.
If they don't call it out themselves, that's a red flag. I'll see the data so I'll know, but it's important they own it.
This is such a great response - thank you!! And it makes total sense of course.
I'm assuming that this works very well for tenured/senior AEs. Outcomes/success metrics for "key account selling" can be set just as easily as for new business development, so just use the metrics suitable for the role but work through the same reporting approach?
Similarly, one could coach less experienced AEs on "what good looks like" and how to interpret the data a little more?
The inclination is to assume this works best with experienced enterprise AEs, but I've applied it with SDRs and Customer Support and it works a treat, but only if you make "ownership mindset" a key characteristic in your hiring process. You've got to get the hiring right, which is a whole other topic.
I like this format and approach to 1:1s