Bad Bros and Bad Blood: a Summer Vacation Book Report

Hey kids: I know you hate all that summer reading you get assigned at the end of every school year. But when you grow up, you’ll discover that reading books is actually the best part of your summer vacation.

Here’s what I read on my summer vacation.

Brotopia — by Emily Chang

I already knew most of the stories — Chris Sacca’s weird hot tub meetings. Susan Fowler’s horrible experience working at Uber. The creepy abuse of power from investors like Dave McClure and Justin Caldbeck. James Damore’s manifesto. Taken individually, they are a series of really, really bad decisions made by men. Assembled together by Emily Chang, they paint a clear picture of the significant obstacles faced by women and minorities in tech. From minor daily annoyances to outright harassment and criminal behavior, women in tech have had to put up with a lot of unnecessary shit. The irony pointed out by Chang is that the numbers show companies that hire more women into leadership positions perform better.

One thing that really stuck out for me in Brotopia is the idea of hiring for “cultural fit,” i.e. hiring people you’d want to get a beer with after work. In the real world, this translates to “hire someone exactly like you”. Chang proposes explicitly hiring for “cultural addition”, i.e. people who can bring diverse experiences and perspectives. That makes a lot of sense to me.

While brainstorming this post, I wanted to think bro culture is less pervasive in the Boston tech ecosystem than in the Bay Area. But just last week, the MassTLC technology association announced the nominees for its annual leadership awards. There were 15 nominees across the categories of CEO, CMO, and CTO of the year. Zero women.

(By the way MassTLC I’ll help you out: Carol Myers of Rapid7 should be the CMO of the year, every year).

Lost and Founder — by Rand Fishkin

In Lost and Founder, Rand shares the story of Moz and what it’s really like to build and scale a startup. I really enjoyed the book and the “cheat codes” he provides based on the successes and failures he experienced building Moz to over $45m in revenue.

Spoiler alert: it’s not always unicorns and rainbows.

The prevailing narrative is that every tech company is a hypergrowth rocketship that’s going to triple-triple-double-double-double its way to unicorn status. Yeah, on occasion this happens, e.g. Box, Slack, Zendesk.

It’s happening at Drift right now.

But the vast majority of startups look more like Moz. Moz is clearly a successful company. They’ve built a great product that users love. At $45m, they have significant scale. They are even profitable! But Rand admits he led the company through a series of bad decisions over the years, including:

  • Launching a slew of new product initiatives that failed.
  • Prioritizing investor feedback over customer feedback.
  • Failed “growth hacks” that brought in a bunch of new users… who all subsequently churned and distracted the company.
  • Turning down a $25m acquisition offer from HubSpot. Note that at HubSpot’s current stock price, Rand would have been worth at least a bazillion dollars by now had he taken the deal.

In the end, Rand left Moz — and seemingly it wasn’t his decision. He seems like a solid guy, and I’m sure he’ll be a better entrepreneur after going through this experience. Thanks to Rand Fishkin for sharing these candid stories.

Bad Blood — by John Carreyrou

Okay, so this was the book I was most interested in reading on vacation. Once I started, I couldn’t put it down.

Bad Blood is the story of Elizabeth Holmes and Theranos, the company she founded after dropping out of Stanford. If you’ve been paying attention, you know this story did not end well. Theranos is now all but over, and Elizabeth Holmes is currently facing a number of criminal charges.

Elizabeth Holmes was a brilliant narcissist with a vision that one day, a single prick of the finger would bring blood testing into every household. Holmes and her arrogant COO boyfriend Sunny Balwani created an extraordinary illusion of success, faking out everyone from Walgreens to the investors who gave her $400m at a $9b valuation. She created a Jobs-ian reality distortion field.

All without a product.

Well, Theranos did have a couple of products. Their first product was a device that could run a few blood tests, but it often didn’t work, and it couldn’t completely replace everything you’d get done in a real lab. So Theranos ended up creating their own lab to analyze blood using tried-and-true Siemens blood testing equipment. The Theranos innovation was modifying the Siemens equipment to handle the significantly reduced blood volume captured by their pinprick device.

Of course the Theranos modification introduced a really big problem: the results weren’t accurate. Walgreens rolled out Theranos to stores in Arizona, and delivered tens of thousands of inaccurate lab results to customers. It took brilliant investigative journalism from Carreyrou to unravel the insane amount of investor fraud and criminal behavior happening at Theranos.

My wife has spent her career in biotech working with the FDA on new drug applications. It often takes decades for companies to commercialize medical research to the point where it can be put into the hands of customers. This is real life-and-death stuff, but Theranos treated it like a beta SaaS application. They purposely avoided both the FDA and the Centers for Medicare & Medicaid Services, both of which are chartered to protect consumers from evil companies like Theranos.

To Holmes and Balwani, the illusion of success and the paper wealth it created was more important than the truth. Bad Blood highlights everything wrong with the current “win at all costs” mentality that permeates the Valley.

I experienced a similar situation after joining Autonomy in 2008. At the time, Autonomy was a tech powerhouse, well on its way to a billion in revenue. Autonomy sold to HP in 2011 at a massive $11b valuation. But less than a year later, HP had to write off a huge chunk of the acquisition over “accounting improprieties,” and last month Autonomy’s former CFO Sushovan Hussain was convicted on a series of felony charges related to the acquisition. Like Theranos, Autonomy formed a distorted reality where low-margin hardware could be sold as high-margin software, and where deals sold to resellers never had to make it to the end customer to be booked as revenue.

Autonomy employees knew that something was way off, but the power of the distortion field was strong. I suspect Theranos employees look back and see many of the same warning signs I saw.

In the end, I feel like the Theranos story should have ended much differently.

Look, Holmes isn’t exactly a sympathetic character, having willingly put lives at stake by delivering inaccurate lab results. But she had a big vision, and was making good progress towards it — albeit more slowly than she was apparently willing to accept. Had she played by the same rules as, you know, every other healthcare startup, Theranos would be on a much different path.

Shoe Dog — by Phil Knight

I saved the most inspiring and grounded story for last.

Shoe Dog is the story of the early days at Nike. It’s the anti-Valley success story. There were no insane growth hacks or unicorn funding rounds. Just the perseverance and grit of the “shoe dog” — Nike founder Phil Knight.

Nike started out as Blue Ribbon, which Knight founded with a $50 loan from his father. Blue Ribbon became the sole distributor of the Onitsuka (now ASICS) Tiger. Knight faced a series of obstacles, ranging from terminated bank loans to a lawsuit that could (maybe should?) have crushed Nike before it even got started.

Today, Nike sales top $30 billion and Nike’s swoosh is one of the most recognized logos on the planet. Knight himself is worth $33.7b.

Not bad for a guy who built his business initially as a passion project.

Go buy the book and read it. If you don’t like it, let me know and I’ll refund the cost 🙂


How to do A/B/N testing in a Pardot Engagement Stream

Want to do A/B testing in a Pardot Engagement Stream? You can’t 🙁 Well, not without a little bit of customization.

Here’s a simple workaround for doing A/B/N testing in a Pardot engagement stream using a Salesforce APEX trigger.

I won’t get into the details of building an APEX trigger, so if you aren’t comfortable with adding code to Salesforce, ask your admin nicely for help. But it’s really pretty easy to do. The process looks like this:

  1. Create a new Lead field in Salesforce. Mine is called abRandom.
  2. Create an APEX trigger with the code below and deploy it.
  3. Create a new custom Prospect field in Pardot and sync it with Salesforce.
  4. Add a rule to your Engagement Stream to branch on the random number you’ve generated.

Here’s the APEX trigger code to generate a number between 1–4 so that I can have up to 4 variations inside an Engagement Stream:

trigger assignRandom on Lead (before insert, before update) {
    for (Lead lead : Trigger.new) {
        if (lead.Testing__c == NULL) {
            lead.Testing__c = Math.random() * (4 - 1) + 1;
            lead.Testing__c = lead.Testing__c.setScale(0);
        }
    }
}

Testing__c is the API name for a Salesforce field I created called abRandom (I probably should update that name, eh?).

This trigger will check to see if the abRandom field is empty and, if it is, generate a random number for every Salesforce Lead. You can create a random number in any range by changing the values below, e.g. if you wanted a number between 1 and 2 you’d change it to:

(Math.random() * (2-1) + 1)
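If you want to sanity-check the bucketing logic outside of Salesforce, here’s the same formula in plain JavaScript (illustrative only; the trigger itself stays in APEX):

```javascript
// Mirrors the APEX formula: Math.random() * (n - 1) + 1 yields a float
// in [1, n), and rounding maps it onto the integers 1..n.
function randomBucket(n) {
  return Math.round(Math.random() * (n - 1) + 1);
}

console.log(randomBucket(4)); // always an integer from 1 to 4
```

One caveat: with rounding, buckets 1 and n each cover only half a unit of the range, so they come up roughly half as often as the middle buckets. If a perfectly even split matters for your test, Math.floor(Math.random() * n) + 1 distributes uniformly across 1..n.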

Once the APEX trigger is deployed, all of your Leads will be assigned a random number that will be synced to Pardot. Here’s what a Pardot Engagement Stream looks like with A/B branching based on the random number.

This example will split test two emails.

Pardot Engagement Stream w/ an A/B test

That’s it. I hope this helps! Maybe someday Pardot will add support for native A/B testing without workarounds like this 🙂


Debunking Five Analyst Relations Myths

Analyst relations is easily the most misunderstood function in marketing.

I’ve been involved with analyst relations — or AR — for over a decade, working on dozens of Gartner Magic Quadrants and Forrester Waves. I’ve experienced the impact that analyst relations, when done well, can have on growth. And I know how much time and effort it takes to do it right. It’s not witchcraft nor is it a simple “spend more / do better” formula.

It’s time to set the record straight, so in this post I’m going to debunk five of the most common myths I’ve come across. Well, it turns out this ex-mathematician is not great at counting, so I’ll be debunking a bonus 6th myth as well 🙂

Myth #1: Analyst firms like Gartner are “pay to play”

Okay, so let’s start with the big one. No, they aren’t. Let’s kill this myth once and for all.

No matter how much money you spend on things like research subscriptions, strategy days, webinars, or events — you can’t buy your way into analyst reports and rankings. Companies who complain about “pay to play” are just bad at analyst relations. There, I said it. I’m going to focus on Gartner to debunk this myth, but the same concept applies to all of the major analyst firms I’ve worked with.

Consider the story of NetScout, which once tried to sue Gartner using the “pay to play” myth. NetScout wasn’t happy with its placement in a Gartner Magic Quadrant (MQ) a few years ago, claiming that:

“Gartner has a ‘pay-to-play’ business model that by its design rewards Gartner clients who spend substantial sums on its various services by ranking them favorably in its influential Magic Quadrant research reports (‘Magic Quadrant reports’) and punishes technology companies that choose not to spend substantial sums on Gartner services.”

NetScout argued that competitors who spend more with Gartner ranked higher, and that Gartner salespeople implied that spending more money would improve their position in the Magic Quadrant.

Here’s how the Magic Quadrant works in a nutshell: Gartner evaluates vendors using a proprietary methodology honed over decades of research. Analysts apply this research methodology to form conclusions about vendors and markets, in a peer reviewed process. Magic Quadrants categorize vendors into one of four quadrants based on Gartner’s assessment of them in two dimensions: Ability to Execute and Completeness of Vision.

In this case, Gartner identified NetScout as a Challenger, pointing out feature gaps in their product and negative customer feedback. NetScout, of course, thought it should be a Leader and sued Gartner. The case never went to trial; the lawsuit was eventually dismissed for the obvious reasons that Gartner is protected by the First Amendment and NetScout couldn’t prove any malicious intent. Gartner is paid very well by its customers for forming a strong opinion based on its methodology.

Gartner analysts couldn’t care less how much you spend with Gartner, and there are firewalls in place to make sure even the appearance of a conflict of interest is minimized. Could a Gartner salesperson have hinted at a connection between investment and MQ positioning? Of course. I’ve never experienced it, but even if it happened, an organization of NetScout’s size knows better.

Now, time to contradict myself. There is indeed one case where it does help to pay — you should really consider purchasing a subscription, sometimes called a seat.

Every analyst firm I’ve worked with will encourage you to provide regular briefings — whether or not you are a customer. These briefings are a monologue not a dialogue, but they are a free way for startups / category creators to get some mindshare without the cost of a subscription. The minute you have evidence of product-market fit you should be doing this at least once a quarter.

But if you are in a market with a Gartner Magic Quadrant and/or a Forrester Wave, or if there’s likely to be one, then you should consider purchasing a subscription. A subscription provides you with access to all the written research from the analysts, and more importantly it lets you schedule “inquiries” with them to get direct 1–1 feedback.

The single best thing you can do to improve how analysts view your company is to help them map your company and products to their vision for a market. To do this, you first need to understand what’s important to each analyst you speak with — the language they use, trends they see, and specific product capabilities they deem important. Once you know this, you can frame your communication in a way that directly speaks to each analyst, using customer references as validation points (more on references later).

Any vendor in a big enough market to warrant a Magic Quadrant is likely able to afford a Gartner subscription. Yes, subscriptions are expensive, but so are many marketing investments, including paid acquisition and events. Doing well in an analyst report can be one of the best ROI investments you’ll make.

The TL;DR is this: there’s value in paying for an analyst subscription to gain more access to analysts. It’s not required, but probably a good idea for a lot of companies — just like investing in Adwords or events is probably a good idea for a lot of companies. But beyond a subscription, paying more to analyst firms doesn’t move the dot.

What does move the dot? Your customers. Which brings us to the next myth.

Myth #2: Your PowerPoint slides matter

Wrong, sorry. Analysts see right through hyperbole-laden BS messaging, NASCAR logo slides, and highly produced demos with more special effects than a Michael Bay movie.

Look, analysts see thousands of PowerPoint slides a year and unless you are Steve Jobs, chances are your slides aren’t going to impress them. What analysts do care about is the strength of your customer references. This is basically all that matters.

Sure, regularly update analysts on your company, positioning, messaging, and pricing with PowerPoint slides. All good foundational stuff. But analysts are forming their opinion of you through their interactions with customers, both the references you provide directly, and the daily interactions they have with their client base. Analysts are digging deeply into your company and product through the lens of the customer. How much value have they received? How difficult was the implementation? How’s your customer support?

If you really want to improve your position in an analyst report, stop worrying about creating better slides and instead worry about creating better customers.

Myth #3: Gartner is the only analyst firm that matters

No way. There are lots of great analyst firms who offer tremendous value to both vendors and end-user customers.

Of course everyone knows the obvious names like Gartner, Forrester, and IDC. They cover lots of markets and buyer personas, and produce the popular vendor reports that get CEOs and boards excited, e.g. Magic Quadrants and Waves. Vendors spend most of their analyst relations time with them, and I think that’s a mistake.

It’s often the more boutique analyst firms who offer more insight and value. These firms often have a narrower focus and are able to dig more deeply into a specific market. For example, there’s no one who knows digital experience better than Scott Liewehr from Digital Clarity Group. Scott personally travels hundreds of thousands of miles a year to speak with the companies and agencies who implement digital experience technology. When a CEO or CMO in this market really needs to get to the bottom of something, the first call they make is to Scott.

Another example is David Menninger, who covers data and analytics for Ventana Research. Dave’s background as both an analyst and (recovering) product marketer at a bunch of successful tech companies gives him a unique perspective that many analysts don’t have. As a former vendor he speaks my language and provides good insight on messaging, positioning, competitive dynamics, etc.

There are lots of other firms like Digital Clarity Group and Ventana Research who serve more specialized audiences. My recommendation is to start by researching the specific analysts who are closest to the customer in your market, not just the analyst firm they work for. Otherwise, you’ll be missing out.

Myth #4: You can move the “dot” in a Gartner Magic Quadrant

This is one of my favorites! Sorry, but this never happens. Let’s start with a quick overview of the Magic Quadrant process.

A new Magic Quadrant kicks off with an email to all of the vendors Gartner thinks are candidates for inclusion. Vendors are provided a specific set of criteria, and then asked if they think they meet it. Gartner combines vendor feedback with its own knowledge to come up with a final list of vendors to evaluate. Each of the vendors is provided with a long set of requirements and asked to provide reference customers. Each vendor is then given an opportunity to present to the analysts for 60 minutes or so. This process takes 2–3 months start to finish. Gartner then compiles all of the findings, going through an exhaustive process over an additional 2–3 months or so.

And then that moment when you get the email from Gartner with “FACT CHECK” in the subject. Sit down. Breathe. Open the email, and you’ll see where all the dots landed!

Careers can be made and destroyed by this one email. Exceed expectations and you are forever a hero to your CEO + board. Do poorly, and, well, sadly I’ve seen people fired. True story: I once fell out of my chair and ran around the office screaming when I saw that Acquia had become a Leader in a Magic Quadrant. Pro tip: you can’t tell anyone about your dot position during the fact check stage, so make sure you have a good story already worked up 😉

Regardless, no matter where the dot lands, no amount of arguing, pleading, or begging is going to move it at this point. Gartner makes this crystal clear in the fact check process, but it doesn’t stop companies from trying. Take any analyst out to dinner, and you’ll likely hear stories about all the irate phone calls they get from CEOs who are furious over the dot. I’ve been told it’s a badge of honor when analysts get these calls from the likes of Larry Ellison and Marc Benioff.

Look, Gartner doesn’t care that you just crushed Q2, or that you never lose to the competitor who is ranked well ahead of you, or that you just raised a $100m Series C from every top-tier valley VC. They only care about their methodology. I’ve worked on a couple dozen Magic Quadrants over the years and I’ve seen a vendor dot move exactly once during the fact check process. Even then it was by such a minuscule amount that you’d barely notice it, unless you obsess over things like this like I do.

The time to move the dot is well before the Magic Quadrant process starts. If the result isn’t what you hoped for, take the feedback, swallow your pride, and get started for next year. Whatever you do, don’t be that company whose CEO makes the angry Friday 5pm call. It’s not going to work. Send them this post.

Myth #5: Just becoming a Leader in an analyst report will double/triple/10x your growth

Hopefully, but usually not even close. Being a Leader helps revenue growth for sure, but maybe not as much as you think. Even then, it takes a lot of work to make the growth happen.

Jeff Mann of Gartner once put this fake Magic Quadrant together as an April Fools’ joke, but there’s indeed some truth to it.

Credit: Jeff Mann

The implication is that buyers prefer Leaders, ignore Niche Players, and are wary of everyone else. To counter this, Gartner and other analyst firms have standard disclaimer language that encourages buyers to look more closely.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Of course buyers should look beyond Leaders, and they usually do. It’s highly unlikely that your requirements directly map to the ranking methodology used by analyst firms. Often buying from a Leader is the worst possible decision you can make.

But being a Leader in an analyst report will absolutely get you on more short lists, which itself can be good or bad if you aren’t prepared. For example, when Ektron became a Magic Quadrant Leader in 2012, it got us into a whole bunch of new opportunities against much bigger competitors like Adobe. It completely changed the dynamics of our sales funnel. Deals were bigger, but took longer to close and were much more competitive. It was overall a huge net positive, but success didn’t happen overnight, and we still had to work hard at it.

It’s great to be a Leader in a Gartner Magic Quadrant or Forrester Wave, and companies who make it in for the first time should be very proud. It’s a huge PR moment. Your CEO, board, and investors will love you. It’s a good morale builder internally. But make sure you set realistic expectations about the growth impact it will have, and make sure you are prepared for the changes that will happen to your business as a result of it.

Myth #6: Your PR firm can manage analyst relations

Probably not; it’s a different skill set.

I’m sure there are some PR firms who are better at this than others, but in my experience, treating analysts as just another influencer channel is dangerous. I recommend two things if analyst relations is really important to your company.

  1. An executive should own it directly and consistently. Too many companies delegate AR far down into the organization, especially as they get larger. In my experience, having one executive own AR over a period of many years builds the sort of trust it takes to influence how an analyst thinks about a market.
  2. Always involve founders or whoever is the actual thought leader at your company in as many analyst interactions as possible. Analysts want to hear from the most credible sources, which isn’t your AR team or product marketers. At Acquia, that meant using Dries Buytaert as often as possible when speaking with analysts.

If you do need help scaling your program, look to agencies who specialize in analyst relations like Spotlight AR. They understand the nuances it takes to run a successful AR program at scale. (Disclosure: I was once a customer of Spotlight.)

Want to learn more?

To demystify the black arts of analyst relations, here are the best analyst relations professionals + resources I’ve come across. Who/what am I missing?

Beth Torrie. I competed against Beth for many years, and she’s the best. I hope to never compete against her again.

Rick Nash and Andrew Hsu at SpotlightAR. They’ve helped dozens of tech companies with their AR programs.

Joely Urton of Box. I’ve never met Joely, but a good friend of mine worked for her, and said she’s great. That’s enough for me.

The Institute of Industry Analyst Relations is a not-for-profit organisation established to raise awareness of analyst relations and the value of industry analysts.


What I Learned Building an App on the Drift Platform

A long time ago in a galaxy far, far away — I was a Computer Science major at the University of Illinois. I grew up programming in BASIC on a variety of hardware, including my beloved TI 99/4A and then later a Commodore Amiga. My parents came up with a brilliant strategy to get me to learn to program — you want to play games on the computer? Build them.

But after graduating college I realized I was a pretty awful programmer. I was working in QA at a software company, and I found out pretty quickly that I’d better find some other way to make a living. Since I was better at talking about technology than building it, I ended up becoming a sales engineer and then later, a marketer.

I’ve always felt that my tech background has helped my marketing career, a phenomenon Scott Brinker wrote about way back in 2008. Marketing as a discipline sits at the intersection of science and humanity, and I’ve tried to balance the two in my career.

But sometimes you just feel the need to build.

Building an App on the Drift Platform

The Drift platform is a new set of APIs to allow developers to create and publish apps on Drift. I wanted to find a way to make it easier for our team at RapidMiner to learn more about the people we’re having conversations with.

I’ve been wanting to get back into coding as a #sidehustle, so I took a look at one of the sample applications that launched with the Drift platform and decided to give it a try.

Every time we start talking to someone with Drift, we try and look them up in Salesforce to learn about them so we can make the conversation more relevant. Salesforce has all of the usual information, but more importantly for us, it has all of the information about how people are using our products.

Our process with Drift at RapidMiner was to cut and paste the email address into Salesforce to look up a user. Sounds easy, but it takes time, and disrupts the flow of the conversation, especially when we are juggling lots of simultaneous conversations.

So, I decided I’d try and build an app called salesforce-lookup that would make it easier to pull data from Salesforce into Drift. It takes the email address of the current user and returns a bunch of useful information from Salesforce directly into the Drift conversation. It looks like this inside Drift:

The app automatically generates a direct link to the user in Salesforce, and returns information about the user, including the number of times and the last time they used RapidMiner. Simple for sure, but it saves a bunch of time for the team and lets us have more relevant conversations.

Here’s how I did it, and what I learned in the process.

Creating a GitHub Account

Okay, I already had a GitHub account, but had never used it before. Wow, GitHub is simple and useful. I can’t compare it to all the old ways to manage code because I never used them, but it sure beats vi and a filesystem.

Apparently people still do use vi (and emacs). And there’s even a Wikipedia article on editor wars. (I’m team vi, for the record).

Setting up Heroku

Heroku is an obvious place to run nodeJS apps, so I gave it a try. It was super easy to set up, and connects right to GitHub. Every time I pushed my code to GitHub it would automatically redeploy the app.

Learning nodeJS and Asynchronous JavaScript

Okay, now the hard part — building the app in nodeJS.

I’ve used client-side JavaScript extensively during my time as a sales engineer, but I never really appreciated how much it has evolved for server-side applications.

My breakthrough was learning that JavaScript can be asynchronous, which means it requires a bit more planning when building the app.

I had to string a series of function calls together to make sure that I had all the information I needed at each step. That meant I had to learn all about callback functions (shoutout to Joel on the Drift team for explaining this to me).

For example, I wrote a function, getContactId, that finds the Drift Contact Id for the user in the current conversation.

In an asynchronous JavaScript world, you don’t really have the Drift Contact Id variable until the callback function GetContactId executes. Once I figured this out, I was able to chain together a series of API calls to Drift and Salesforce.

The Drift APIs were easy to use, but Salesforce was a bit harder. One of the best parts of nodeJS is the ecosystem, and I found a robust library called JSforce to help with the Salesforce query.

The hardest part was logging into Salesforce using OAuth. I had never looked at OAuth before, so I had to learn enough about it to access both the Salesforce and Drift APIs. Authentication to Salesforce is a little bit more difficult as you have to handle tokens that can expire.

Once I was able to authenticate, it’s just a simple SQL-like query to return what I needed for my app — the user’s name, company, and some details about their RapidMiner product usage. I can query any field we have inside Salesforce.

conn.query("SELECT Id, Email, FirstName, LastName, Company, Academics__c, Total_RM_Studio_starts__c, Last_RM_Studio_usage__c " +
    "FROM Lead WHERE Email = '" + emailAddress + "'", function(err, result) {
        if (err) { return console.error(err); }
        // result.records contains the matching Lead(s)
});

Then I put together the message…

// Build the Drift reply body
body = "<a target='_blank' href='" + Id + "'>" + firstName + " " + lastName + "</a><br/>" +
    "Company: " + Company + "<br/>Total RM Studio Starts: " + totalStudioStarts +
    "<br/>Last RM Studio Usage: " + lastStudioUsage + "<br/>Academic: " + Academic;

And lastly, send the message back to the Drift conversation.

// Send the message (via the superagent HTTP client, imported as `request`)
return + `/${conversationId}/messages`)
    .set('Content-Type', 'application/json')
    .set(`Authorization`, `bearer ${DRIFT_TOKEN}`)
    .send({ body: body }) // exact payload shape per Drift's messages API
    .catch(err => console.log(err))

I was glad to learn that the best way to debug things is still to output stuff to the console. That hasn’t changed since my days of debugging BASIC. At one point, I basically had a console.log every other line.

All marketers should learn to program

When I finally made this work (at 2am last night, just like in college!) it felt really great. There’s nothing like building something and seeing it actually work. Heck, someone has even forked my code already (looking at you Brian Whalley, good luck!). I’m sure my code is awful, but it does something useful, and it was really fun to build.

Steve Jobs once said everyone should learn to program a computer, and I agree. It helps you develop logical thinking (hey, asynchronous JavaScript), and many of the same concepts you learn in programming apply to lots of things we see as marketers. For example, your Marketo, Pardot, or HubSpot segmentation lists are just complex boolean expressions.
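For instance, a hypothetical segment like “software or SaaS companies scoring 80+, excluding Germany” boils down to a few lines of boolean logic (the field names here are made up for illustration):

```javascript
// A segmentation list, written out as the boolean expression it really is.
const prospect = { industry: "Software", score: 85, country: "US" };

const inSegment =
  (prospect.industry === "Software" || prospect.industry === "SaaS") &&
  prospect.score >= 80 &&
  prospect.country !== "DE";

console.log(inSegment); // true
```

Swap in your marketing automation tool’s rule builder and the AND/OR/NOT structure is exactly the same; the UI just hides the expression.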

The advent of platforms like GitHub and Heroku, plus the availability of a massive variety of training resources, makes getting started with programming easier than ever. Seriously, imagine getting a CS degree today without StackOverflow (yeah, that's what my generation had to do).

function getContactId(conversationId, callbackFn, orgId) {
  // Get the conversation from Drift to find the contact who started it
  request
    .get(CONVERSATION_API_BASE + `${conversationId}`)
    .set('Content-Type', 'application/json')
    .set('Authorization', `bearer ${DRIFT_TOKEN}`)
    .end(function(err, res) {
      // Pass the contact id from the conversation payload to the callback
      callbackFn(res.body.data.contactId, conversationId, orgId)
    })
}

// Callback function: chain into the email lookup next
function GetContactId(contactId, conversationId, orgId) {
  return getContactEmail(contactId, GetContactEmail, conversationId, orgId);
}

Your Best Startup Strategy is Better Execution

It's the most wonderful time of the year. No, not that time. The other one. The time when your startup gets together and starts planning for the upcoming year. Some of you have already finished this process, but with my fiscal year ending in January, I'm currently in the midst of it.

That’s why this article from Sarah Hodges on how to run better management team offsites caught my attention.

In particular, I loved this quote from my former colleague and friend Tom Erickson, who led Acquia as CEO.

Make sure that a focus on execution remains front and center. Have a regular “execution” meeting and less regular “strategy” meetings.

— Tom Erickson, CEO, Acquia & Co-Founding Pillar


I’ve often thought that as leaders, we focus too much on strategy and not enough on execution. Peter Drucker once said culture eats strategy for breakfast, and I agree. I think organizations with a culture of getting shit done win. I’ll take a relentless executor over a “strategic marketer” any day.

I know there is a larger debate around strategy vs. execution. At larger companies, planning, strategy, and execution are separate disciplines. But my experience comes from working at startups that don't have the luxury of dedicated teams augmented by management consultants, so finding the right balance is important. Focus too much on strategy, and you'll end up with a lot of great looking PowerPoints but metrics that look more like 〰 or 📉.

Set the Corporate Strategy Annually

Having worked for Tom for ~3 years, I’ve experienced how he found the right balance between strategy and execution.

Once a year in November, Tom brought the entire extended leadership team together to run a three day activity called the Goal Deployment Process, or GDP as we called it. This process was used to identify the most important issues for Acquia to tackle in the upcoming year, prioritized by their potential impact. It was a team exercise where Tom laid out the three year objectives for the company, and we worked backwards into all the things that would need to happen for us to get there.

The outcome of the GDP was a list of ten or so strategic cross-functional initiatives for the upcoming year, with clear definition of ownership, metrics, and specific actions to take. Here’s a look at what one of them looked like:

The robust planning process drove alignment on the most important items to the company. The GDP actions were often tweaked and metrics were redefined, but very rarely did we decide to abandon any completely during the year. Many of the important milestones in Acquia’s growth came as a result of the GDP process.

Drive Daily Execution

But the most important part of the GDP process wasn’t the annual meeting, it was how it affected daily priorities across the company.

Each GDP owner was expected to drive their actions across the company. To make an impact, they had to structure their day around each of the cross-functional actions they owned. Every month, the extended leadership team would get together to review the metrics. We used a spreadsheet with the plan and actual results, with color coding to highlight areas where we were performing below expectations.

It was simple and effective. Every month, GDP leaders had to provide an honest assessment of how well (or not) we were able to drive improvements. Too much red? You aren’t executing well. Too much green? You probably set your goals wrong. Note that with Tom, I learned (the hard way) that it was better to have more yellow and red than green.

Tom ran the weekly executive leadership meeting at Acquia the same way. Every Monday morning, he brought the exec team together to review a spreadsheet containing a running list of the most important actions across each department in the company. These were usually less cross functional than the GDP, but still important to the success of the company.

This list forced each executive to provide a weekly assessment of where they stood on important deliverables. The simple format wasn't appropriate for managing the details of the actions, but it was an easy way for the exec team to track progress, and if something was blocked, it could be addressed directly in this meeting.

Anything that fell in the strategy bucket in the weekly meetings was noted, but not addressed as a part of this meeting. Tom kept the meeting all about execution.

You need a strategy for better execution

Here's the thing: execution is hard, and it's a grind. Being great at execution takes daily hard work. There were times when I hated the GDP process and all the time it took to drive actions and report on results. But looking back, I realize it was one of the main reasons Acquia grew as fast as it did under Tom's leadership.

Chances are, your startup already has a good enough strategy in place. You know your customers, and you've built a business model. You've got product-market fit and a go-to-market in place. But do you have a strategy for getting better at execution?

If you don’t, that should be a big topic for your 2018 planning process.


Lessons Learned Shifting to Product Qualified Leads

The Death of the Marketing Qualified Lead?

A little over a year ago I wrote an article about killing the marketing qualified lead. Here’s a quick refresher: Once upon a time, sales and marketing hated each other. Sales complained about never having enough leads, and marketing complained that sales were terrible at following up and converting them. It was the dark age of marketing.

Enter marketing automation systems, and the birth of the “Marketing Qualified Lead (MQL)”.

The MQL forced the alignment CEOs were looking for, requiring sales and marketing to agree on the traits and actions that described a good lead. It required a good content strategy to guide prospects through the complex B2B buying journey. It drove a consistent set of lead management processes that made it easy to measure conversions at key points. It let marketing prove contribution to revenue.

All of that sounds great, so why kill the MQL?

Marketing automation products made it easy (maybe too easy) to automate processes that would nurture users over time until they became "qualified" by whatever lead scoring mechanism was put in place, usually by relentlessly emailing users and getting them to complete forms.

But how qualified is someone just because they filled out a form or opened a few emails? David Cancel of Drift captures the issue:

The MQL and the associated sales and marketing processes around it simply didn’t reflect the modern reality of the empowered buyer. And because marketing teams were goaled on hitting an MQL target, we figured out how to game the system. Most lead scoring models overweight activity, so send enough nurturing emails or run enough webinars and you’ll MQL everyone.

Ultimately, the hit-the-MQL-goal-at-all-costs mentality drove marketers to prioritize short-term monthly goals over building the type of sustainable brand magnet that creates demand over years and decades. Hitting the MQL goal became a spreadsheet exercise: simply find a low cost acquisition channel and nurture the heck out of leads until they relent, errr, "qualify".

Even Account Based Marketing — the darling of everyone’s 2017 marketing budget — doesn’t deviate much from the traditional SaaS 1.0 playbook. It’s certainly a more effective way to reach the people most likely to buy your products, but when you dig into most ABM programs, they are still based on the tried-and-true tactics illustrated by David Cancel in the image above.

Enter the Product Qualified Lead

In August 2016, RapidMiner transitioned away from MQLs. I had just read Why Product Qualified Leads are the Answer to a Failing Freemium Model from Christopher O'Donnell, and it seemed like a perfect fit for us. RapidMiner has a huge user funnel thanks to a strong brand among data scientists, combined with a freemium / open source distribution model.

RapidMiner Studio is an open source data science platform, with commercial versions that add access to additional data and unlock faster performance. We define a PQL as someone who becomes a user of our flagship RapidMiner Studio product by downloading, confirming their free account, and using it at least once.

Here’s What I Learned About Killing the MQL

In the rest of this post, I’ll share six lessons we’ve learned 12 months into our PQL journey:

Lesson #1: Forget sales and marketing alignment. It starts and ends with product.

“We sold what to whom?”

Remember the telephone game, where you whisper a story to a group of people one person at a time to see how much it changes by the end? A similar phenomenon happens in an MQL-dominated pipeline, where the leads you generate and the opportunities created from them often look vastly different from the users you build products for. That's because the "sales and marketing" alignment renaissance triggered by the MQL was actually the wrong goal. We forgot about aligning around product.

The PQL shift at RapidMiner immediately aligned the entire company with product at the core. I get that this sounds obvious, but at most companies the sales and marketing tail still wags the product dog. We're now very clearly product led at RapidMiner, completely aligned around the core personas we serve.

The best part is that it's now much easier for me to make growth a more important part of the product backlog, and we've implemented a series of new product-led initiatives to help increase user acquisition and retention.

Lesson #2: You need a product oriented sales team.

When RapidMiner shifted to PQLs in August 2016, we took a hard line: we only assigned PQLs and “hand raisers” (people who ask us to follow up with them) to our inside sales team. We took almost every form off of our site, and we stopped passing event leads to sales. A year later, we’re still only passing PQLs to sales.

By focusing only on PQLs, the quality of our sales conversations improved. Our target buyer is most often a data scientist — not exactly the easiest persona to qualify. So we put our sales team through extensive product training, supplemented by sales engineers and other internal data science resources. While the PQL model didn't eliminate the need for technical pre-sales resources, it did change the conversation from selling features ("can RapidMiner do this?") to delivering value ("can you help me create a customer churn model?").

By the way, here’s a great article from Auren Hoffman on the differences between relationship and product oriented teams.

Lesson #3: Selling is helping

We designed our PQL sales process with the expectation that users would want to engage directly with our inside sales team before buying. While we do a few self service transactions each month, most customers engage directly with our sales team. And that’s because users view us as trusted advisors, not transactional sales people. We’re helpful.

Getting someone to respond to an email or a phone call — even an active product user — is still difficult for all the obvious reasons (looking at you, spammy nurturing MQL email senders). Here's the approach that works for us. Our emails and phone calls go something like this:

Hey there, thanks for trying out RapidMiner. Please use me as a resource as you learn RapidMiner, and don’t hesitate to reach out if you need help.

The key learning for us was that our inside sales team needed to be able to offer something of significant value the user couldn’t get elsewhere — help, with a personal touch. Which brings me to the next lesson.

Lesson #4: How to scale helpfulness

We’ve made it a company goal to put our users first, even the ones who will never pay us. That’s easy to write on a PowerPoint slide, but it has serious implications when you have the type of scale we have at RapidMiner with tens of thousands of users each month.

One of the ways we've scaled a culture of helping with a small team is by using Drift, a live chat platform. We get hundreds of questions a day from data scientists, and we try to answer every single one. Some questions we can answer right away; other times we direct users to the RapidMiner Community, where they can get questions answered in a few hours. But everyone who comes to our chat and talks to us gets a response; it's the least we can do.

Through a little bit of automation and tooling, we've been able to scale helpfulness to thousands of users with just a couple of dedicated resources (shoutout to RapidMiner Community leader Thomas Ott and Yasmine, our Northeastern co-op who runs Drift).

We know that most of the people we help won’t buy our product — at least not right away — and that’s okay. By helping everyone, we are building a sustainable competitive advantage in the form of brand built on the back of a community. I’d like to think this is one of the reasons why RapidMiner is the most popular general data science platform on KDnuggets and the highest rated product on both G2 Crowd and Gartner Peer Reviews.

Lesson #5: Documentation is the new content marketing

Moving to a PQL model required us to think differently about content. Since our top priority was to create active users, our content strategy needed to focus on user on-boarding and education. Buzzfeed-y listicles and flashy infographics don't provide much value to a data scientist looking to tune a gradient boosted tree.

For example, instead of hiring a content marketer we reallocated budget to the product team to hire a documentation writer. Instead of creating lightweight infographics and eBooks, we create product tutorials and educational content.

To measure the effectiveness of these efforts, we look closely at product survival cohorts to see how specific actions impact survival over time. Here's an idea of what we look at (not our data…)

Example of Product Survival Cohorts

Lesson #6: Pricing and packaging has to scale with value

Lastly, the key to success in a PQL model is that users have to clearly see the value in moving from the free to the paid versions. As anyone who works in open source will tell you, this is really hard to get right.

In August of 2016, we launched new pricing and packaging to support our PQL model. We had previously based our pricing on access to specific features, for example, the ability to access an enterprise database like Oracle. But we heard from users that restricting features made it difficult to figure out if RapidMiner was the right tool for them, so we made two changes:

  1. We included all features with the Free edition of RapidMiner Studio.
  2. We simplified pricing to scale along two dimensions: the amount of data you can use, and the ability to speed up RapidMiner Studio by using more logical processors.

This model let everyone use all of the features of RapidMiner Studio for free, and as users progressed from prototyping models to putting them into production, the pricing step-ups made sense.

Bottom line: the PQL model works

Twelve months into our PQL journey, we've seen some really great results. We doubled monthly active users even though we spend almost nothing on user acquisition. We've lowered CAC and increased LTV.

Why? Because our funnel is full of users who have discovered they like RapidMiner well in advance of signing a deal with us, and often before they engage with us at all.

Build great products, help people use them.

(by the way, I really don’t hate the MQL; I just hate that the definition of qualified became so unscientific and drove bad behavior. More on that here.)


Predictive Account Based Inbound Agile Marketing Automation

We’ve hit peak Account Based Marketing.

I’ve received 5 emails this week from vendors extolling the virtues of ABM. I’ve been invited to one dinner, two lunches, received a $50 Amazon gift card, and was told by Marketo that “According to 97% of marketers, account-based marketing (ABM) achieved a higher ROI than other marketing initiatives.”

Everyone is “flipping their funnel” using Account Based Marketing and I’m missing out. Or am I?

Ok. Let's say I buy that number from Marketo (or something close to it). That still doesn't mean that Account Based Marketing is right for me. Maybe I'm a proud member of the 3% who think ABM is probably great for lots of organizations, but not for mine? Note that none of the BDRs who invited me to dinner tried to make the case for why my company (RapidMiner) would benefit from ABM in the first place. You know you're at the top of a hype cycle when it's acceptable to market "the thing" and not what the thing delivers, or why you should care.

The truth is that really great companies almost never follow someone else's playbook, at least not verbatim. Salesforce, Slack, Atlassian, and HubSpot didn't follow trends; they created them. They went right when everyone else was going left. They were the 3%.

The ABM phenomenon got me thinking: what makes marketers so susceptible to trends? Maybe it's that we like being marketed to. We appreciate masterfully executed campaigns like Flip My Funnel and Account Based Everything. We certainly like belonging to a tribe and being part of a movement. Ever been to Dreamforce or Inbound? There's clearly comfort in numbers. There's nothing inherently wrong with following a trend, but it's almost never as easy to pull off as the trend-makers make it seem.

For example, how many "Appropriate person?" emails have you received and deleted this week? I wouldn't know, because I created a rule to delete them a long time ago.

The book Predictable Revenue by Aaron Ross taught this approach to outbound sales teams, and it absolutely worked for many of the most successful tech companies on the planet, until it didn't. By then, the trend-makers had already moved on, while the trend-followers continue to blast templated emails to tuned-out audiences like me to this day.

E-Meet me?

Vendors know the power of trends and the momentum they create. It’s not a coincidence that ABM vendors are teaming up to support the movement. ABM vendor Terminus founded the “Flip My Funnel” community alongside a number of competitors. Even traditionally laggard analysts like Gartner are jumping on board the ABM hype train.

But trends come and go. Ask anyone over 40 how tech marketing worked before AdWords and marketing automation, and it will sound a lot like ABM. Tech marketers in the 90s and early 2000s didn't have the luxury of the low cost distribution channels we have today, so focusing on accounts was the only way to go.

Sure, it was much harder to do ABM then, and the advent of all the new ABM companies makes it much easier to do ABM at scale today. Still, ABM isn't new, and it worked then for the same reasons it works now.

Look, I’m not against ABM or any particular marketing tactic. Brandon Redlinger of Engagio lays out some great advice for organizations considering ABM. I’m just against flocking to a trend because everyone else is. There aren’t any growth shortcuts or get rich quick schemes. As I’ve said before, study these trends. Learn from them. Be inspired by them. Implement some of them. But don’t blindly follow them just because you think everyone else is.

And don’t be afraid to be a part of the 3%.


When Marketing Personas Fail

Marketing personas are those fictional people with clever names, like Statistical Stephen at RapidMiner or Marketing Mary at HubSpot. Personas are formed through extensive quantitative and qualitative research, and they represent the ideal prospects for your product.

Goofy names aside, getting alignment across your target personas, and more broadly across your entire customer segmentation strategy, is perhaps the single most important thing to get right at a growth tech company. But what often happens is that the whimsical personas created by marketing never really leave the PowerPoint slide they were created on, and aren't truly embraced by the entire organization as they should be.

See if this completely fictitious scenario sounds familiar at your company…

Marketing is asked to update the core company personas, so they go out and do extensive customer research and come up with three primary personas the company should be targeting. They develop differentiated messaging for each that eloquently connects customer need back to the product. Playbooks are created, sales is enabled, and demand is generated. So far, so good.

But sales isn't totally bought in. They watched the training session from marketing, and while they found some of the material and persona work helpful, they have a quota to hit this quarter. So they continue to go after prospects who don't really fit the personas defined by marketing: perhaps brands they recognize, companies a rep has sold to in the past, or verticals the sales team is organized around that are no longer a good fit. For whatever reason, sales isn't fully aligned, so they continue to chase a different set of targets than the ones identified by marketing.

Meanwhile, customer success was unfortunately not involved in defining the personas. Had they been, they would have pointed out a fatal flaw: one of the target personas has a high churn rate. The persona looks like a great fit from a customer acquisition perspective, but when you follow it through renewal and expansion, signing them up is just not worth the effort.

Finally, product and engineering continue to build product and shape the roadmap through entirely different conversations with sales, marketing, and customer success. They may even have their own persona definitions outside of marketing.

None of this happens at your company because you are fully aligned, right? Yeah, probably not.

As I mentioned before, I believe that getting alignment across the entire organization on customer segmentation is the single most important thing a tech company can do to scale. And the CMO needs to be the change agent that gets everyone — marketing, sales, product, engineering, customer success — on the same page, and keeps them there.

Brian Halligan of HubSpot tells a great story of misalignment and the resulting “optionality tax” paid by companies who aren’t fully aligned in the story of Mary Marketer.

In the early days of HubSpot, they sold to two primary personas: Owner Ollie, who represented HubSpot’s really small (< 10 employees) market segment and Mary Marketer, who represented someone in the marketing department of a larger SMB company.

For years at HubSpot we debated our target market persona. We had one camp that wanted to build our offering for Owner Ollie, a small business owner with less than 10 employees and no full time marketer. We had another camp that wanted to build our offering for Mary Marketer, a marketing manager who worked in a company between 10 and 1000 employees.

I was a HubSpot customer during the Owner Ollie days. The product was a jumbled mess of SEO and social tools with a touch of email marketing, landing pages, and reporting mixed in. You could see there was massive potential, and the product was iterating fast. But it was still confusing to me because, as the CMO of a 250 person tech company at the time, I had a much different set of needs than the owner of a plumbing supply store somewhere in the Midwest.

For HubSpot to thrive, they had to choose and eliminate the optionality tax. They chose Mary. Here's how Brian Halligan describes the magic that came from focus:

By picking Mary, our marketers could now build content that attracted her and stopped watering our blog (and other assets) down with business owner content.

By this time, HubSpot already had an army of content creators, and now they were entirely focused on the needs of Mary. This let HubSpot distance themselves in the crowded space of content marketing best-practices.

By picking Mary, our sales reps only were rotated leads from companies between 10 and 1000 employees (lead scoring works, btw), honed their value proposition on how to help Mary grow, and largely forgot about Ollie.

Sales immediately got on board with the Mary decision, simplifying their qualification and narrowing their messaging approach.

By picking Mary, our product folks could laser focus on delighting Mary and stop splitting the baby on the UI and feature set they were building for both. If someone suggested an Ollie feature, they’d simply say “no” and move on — no more hand wringing.

The product team could focus on improving the user experience and addressing the feature gaps that prevented HubSpot from selling to more Marys.

And lastly, because they were so focused on the needs of Mary, customer churn decreased dramatically, and HubSpot got over the magical 100% revenue retention number that is so important for high growth SaaS companies.

The Marketing Mary decision at HubSpot fully aligned marketing, sales, customer success, and product. The results speak for themselves: every single metric went up through complete alignment and the focus that came with it:

So how do you get a company fully aligned and keep it there? It takes hard work. Personas aren't a "one and done" activity. They can't just exist in a PowerPoint slide or a printed picture that hangs on a wall. The CMO must constantly keep them updated and relentlessly make sure the entire organization remains in agreement on the qualitative and quantitative measurements of what makes a good persona for your company.

This elevates the CMO into a much more strategic role in the organization, something Dave Kellogg from Host Analytics touches on in his post The Evolution of Marketing Thanks to SaaS.

Tomorrow, as more marketers will be measured on the health of the overall ARR pool, they will be focused on cost-effectively generating not just opportunities-that-close but opportunities that turn into the best long-term customers. (This quadrant helps you do just that.)

And that’s exactly where we need to be.

Here's some additional reading on the topic of segmentation and personas:

How to Design Marketing Campaigns: The Importance of Market Segmentation by Myk Pono

The Importance Of Segmentation For Your SaaS Startup by Tom Tunguz


How to Send Almost Anything to Microsoft Teams using Webhooks and Zapier

Microsoft Teams is set to take on the new breed of collaboration apps like Slack and HipChat.

Each collaboration area is called a Team — and each Team can have multiple channels. Within a channel you have all the basics like chat (with emojis!) and document sharing, but it really gets interesting when you connect Teams to other applications. Microsoft Teams launched with a bunch of connectors to popular apps like Trello, Jira, Zendesk, and many more — but it’s really easy to integrate almost anything with Teams using Zapier and Webhooks.

Webhooks are a type of event-driven API that lets you push and pull information between different types of apps. Lots of popular applications support Webhooks, including both Slack and Microsoft Teams. My objective was to create a channel for my marketing team at RapidMiner that would let me aggregate customer feedback — the NPS data we capture from SurveyMonkey.

The first step is to add a Webhook to your Microsoft Teams channel. You'll give the Webhook a name and a custom logo. Then it will provide a unique URL to receive the information you push from other apps like Zapier:

Now that you have a URL for your Webhook, you can use Zapier to start pushing information into your Microsoft Teams channel.

Starting with my SurveyMonkey example, you’ll use the built-in SurveyMonkey Zapier trigger to start the Zapier process.

Then you’ll use “Webhooks by Zapier” to publish the chat transcript to Microsoft Teams. I prefer to use the Custom Request type for its flexibility. More on that later.

Now select the POST method and add the URL of your Webhook. The Data section is where you'll configure the message you want to send. Since we selected Custom Request, we need to format the JSON using the Microsoft Teams Card format.

Here’s what I use:

{
  "summary": "NPS Survey Response",
  "sections": [
    {
      "activityTitle": "<b>NPS Survey Response</b>",
      "activityImage": "",
      "facts": [
        { "name": "NPS Score", "value": "..." },
        { "name": "Needed for a Higher Rating", "value": "..." },
        { "name": "View in Salesforce", "value": "..." }
      ]
    }
  ]
}

(Each "value" gets filled in with the matching field from the earlier steps in the Zap.)

This creates a card in Microsoft Teams using your webhook.

Microsoft Teams Card

These are really simple examples, and you can do much more including adding markdown formatting and action buttons.
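And if you ever outgrow Zapier, you can assemble the same card in a few lines of code and POST it to the webhook URL yourself. Here's a rough sketch of a card builder; `buildNpsCard` is a made-up helper, but the field names follow the same Teams connector card format used above:

```javascript
// Assemble a Microsoft Teams card payload for an NPS survey response.
// `buildNpsCard` is a made-up helper; POST the returned object as JSON
// to your channel's webhook URL to render the card.
function buildNpsCard(score, comment, salesforceUrl) {
  return {
    summary: 'NPS Survey Response',
    sections: [
      {
        activityTitle: '<b>NPS Survey Response</b>',
        facts: [
          { name: 'NPS Score', value: String(score) },
          { name: 'Needed for a Higher Rating', value: comment },
          { name: 'View in Salesforce', value: salesforceUrl },
        ],
      },
    ],
  };
}
```

Send it with any HTTP client as the JSON body of a POST to the webhook URL, and the card shows up in the channel.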


Stop The Lead Scoring Madness

I recently came across a lead scoring article on the Mattermark blog that reminded me of a post I’ve been wanting to write for a while: the way nearly everyone is doing lead scoring is totally wrong.

In the modern era of data-driven marketing, where marketers boast about being able to connect every penny of marketing spend to revenue, we are using nothing more than gut instinct and anecdotal feedback to figure out the best leads to pass to sales. The main culprit in this data-driven nightmare is the way lead scoring is implemented in marketing automation systems like Marketo and Pardot.

For example, here’s how lead scoring currently works in Pardot: at the system level, you can assign points to specific actions like email opens, page views, form submissions etc.

You can also add points as a completion action to things like form submissions, to weigh specific types of actions and content more heavily than others, such as an analyst report or a specific webinar.

The idea of course is that now we can pass the highest scored leads to sales, and that there is a correlation between the lead score and likelihood to buy.
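To see why that assumption breaks down, here's a toy sketch with made-up point values and leads: a lead who binge-attends webinars ends up "hotter" than a lead showing genuine buying intent:

```javascript
// Toy point-based lead scoring: every action adds points, regardless of
// what it actually says about intent. Point values and leads are made up.
const points = { emailOpen: 5, webinar: 20, pricingPageView: 15, demoRequest: 30 };

const score = (actions) => actions.reduce((total, a) => total + points[a], 0);

const serialWebinarAttendee = score([
  'webinar', 'webinar', 'webinar', 'webinar', 'emailOpen', 'emailOpen',
]); // 90 points

const realBuyer = score(['pricingPageView', 'demoRequest', 'emailOpen']); // 50 points

// The webinar junkie "outscores" the lead with actual buying intent.
```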

But as Mike Volpe points out, the way we assign "points" makes absolutely no sense, yet somehow it has become accepted as the gospel of modern lead management best practices.

It turns out that accumulating points is actually a terrible way of predicting an outcome, leading marketing to pass horrible leads to sales, like the serial webinar attendees (you know, those people who come to EVERY SINGLE WEBINAR you run) and habitual email openers, instead of the people most likely to actually buy your product.

It's likely that much of the benefit of manual lead scoring comes from the placebo effect of telling sales that you are only passing them the best leads. In turn, they work those leads more exhaustively, and you get better results than with the unscored leads of the past that sales would mostly ignore. But that doesn't mean they are better quality leads. They could actually be worse and you'd never know it.

Don't believe me? Here's an A/B experiment you should run immediately:

Take 10–25% of your lower scored leads under your current lead scoring model and start passing them to sales. The leads should look just like the higher scored leads (MQLs) you are currently passing, and don't tell anyone, so there is no bias in how the leads are worked. Make sure you can identify leads in the experiment group versus the control group.

Run the test for as long as it takes to hit statistical significance, which depends on how many of your leads convert into whatever metric you are using to gauge the health of leads, e.g. SAL, SQL, etc.

A few weeks later, after your sales team finishes following up on the batch of leads, compare the two groups on whatever conversion metric you chose. Did the higher-scored leads convert better than the lower-scored leads? When I ran this experiment at my last company they didn’t — the lower-scored leads converted at a slightly better rate than the higher-scored leads. Oops.
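If you want to put a number on “did they convert better,” a two-proportion z-test is the standard way to compare conversion rates between the two groups. Here’s a stdlib-only sketch (the counts in the usage line are made-up placeholders you’d pull from your CRM):

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion rates of two lead groups, e.g. SAL rate of
    higher-scored vs. lower-scored leads. `conv_*` are converted counts,
    `n_*` are group sizes. Returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: 120/1000 higher-scored leads converted vs 80/1000 lower-scored
z, p = two_proportion_z_test(120, 1000, 80, 1000)
```

If the p-value comes back above your threshold (0.05 is the usual convention), you can’t claim the higher-scored leads are actually any better.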

Building a lead scoring model by assigning arbitrary points to the actions and attributes of leads is about as un-data-driven as it gets.

The solution is to build a lead scoring model on a proven mathematical model that will actually tell you the features — the attributes of a lead — that have a statistically significant correlation with the outcome you are trying to predict, e.g. people who buy your product. Unlike the randomness of assigning points to specific behaviors and attributes of a lead, a predictive model will give you a statistical measure of which features actually matter the most.

There are predictive marketing products that will do this for you — like Infer, Mintigo, and Everstring. But they come with a hefty price and are overkill for many companies. You can get started for free using tools like RapidMiner Studio or even good old Microsoft Excel.

The first problem you’ll need to tackle is getting access to clean data. Predictive lead scoring is mostly a “wide data” problem; you want as much data as possible about your users to uncover the specific features that have the most impact on closing deals. This means combining data from your CRM and marketing automation system, hopefully your product, and perhaps even external APIs. But chances are the data from your CRM and marketing automation system is clean enough to get started.

Once you have the data, you’ll model it using a variety of machine learning techniques; most likely you’ll start with some form of regression. I won’t attempt to explain the math behind regression here, but if you are interested, here’s a good overview from RapidMiner founder Dr. Ingo Mierswa.
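To make the idea concrete, here’s a toy logistic regression fit with plain gradient descent. This is purely illustrative (in practice you’d use RapidMiner Studio, Excel, or a library, and the feature names and data are invented), but it shows how a fitted model surfaces which attributes actually predict a closed deal:

```python
from math import exp

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Tiny logistic-regression fit via gradient descent.
    X: list of feature rows (scaled 0-1), y: 0/1 outcomes (closed-won or not).
    Returns one weight per feature, plus an intercept as the last element."""
    n_features = len(X[0])
    w = [0.0] * (n_features + 1)  # last slot is the intercept
    for _ in range(epochs):
        for row, label in zip(X, y):
            z = w[-1] + sum(wi * xi for wi, xi in zip(w, row))
            pred = 1 / (1 + exp(-z))      # sigmoid: predicted buy probability
            err = label - pred
            for j in range(n_features):   # gradient step per feature weight
                w[j] += lr * err * row[j]
            w[-1] += lr * err
    return w

# Invented toy data: feature 0 = product starts (scaled), feature 1 = webinar attendance.
X = [[0.9, 0.1], [0.8, 0.9], [0.1, 0.8], [0.7, 0.2], [0.2, 0.9], [0.95, 0.3]]
y = [1, 1, 0, 1, 0, 1]
weights = fit_logistic(X, y)
# In this toy data, product starts track purchases while webinar attendance
# doesn't, so weights[0] ends up much larger than weights[1].
```

The point is the weights, not the predictions: they tell you which behaviors carry real signal, which is exactly what arbitrary point assignments can’t do.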

Here are two resources for getting started with either Excel or RapidMiner Studio for free:

  1. This video from Ilya Mirman, the VP of Marketing at OnShape, shows you how to build a predictive lead scoring model in Excel using linear regression. Okay fine, Ilya is a Stanford-educated engineer and MIT MBA, but the approach he outlines is something even mere mortal Excel power users could figure out with a little bit of effort. If you can do a VLOOKUP, you can try this.
  2. With a bit more effort you could download RapidMiner Studio, which will help you both prepare your data and build your predictive model. RapidMiner Studio provides far more options for modeling than Excel, and will help you get a more accurate predictive model. The current lead scoring model we’re using at RapidMiner was built by my colleague Thomas Ott, and here’s a webinar where he walks through how we built the model complete with a sample of our data and the model we use today.

(Image: Lead Scoring in RapidMiner Studio)

Regardless of which tool you use, you’ll get a result that looks something like this — a weighted view of the lead attributes that actually result in more deals.

For example, in our RapidMiner lead scoring model, the number of product starts was the most important factor in predicting a purchase, and leads from the Netherlands were more likely to close than leads from any other country.

We used RapidMiner Server to automate adding our predictive lead score to Salesforce, but even a simple Excel output of your weighted attributes is enough to make those marketing automation point assignments using mathematically sound principles instead of educated guesses. That’s a huge step in the right direction, and you didn’t even have to spend a bunch of money on a predictive lead scoring product to make it happen.
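Translating model weights back into marketing automation point values is just a scaling exercise. A quick sketch (the attribute names and weights below are made up for illustration, not our actual model output):

```python
def weights_to_points(weights, max_points=100):
    """Scale model coefficients into point values: the strongest attribute
    gets `max_points`, everything else is scaled proportionally, and
    negative weights become point deductions."""
    biggest = max(abs(w) for w in weights.values())
    return {name: round(w / biggest * max_points) for name, w in weights.items()}

points = weights_to_points({
    "product_starts": 2.4,        # hypothetical coefficients from your model
    "country_netherlands": 1.1,
    "webinar_attendance": -0.6,
})
```

Now the points in Marketo or Pardot at least reflect what the data says, even if the scoring system itself is still just adding them up.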

Lastly, I highly recommend everyone read Myk Pono’s opus on How To Design Lead Nurturing, Lead Scoring, and Drip Email Campaigns. Myk goes into this and many more topics in great depth.

Whew, feels good to get that post off my chest 🙂