Place Your Bets
I've met with a number of companies recently, as I seek to pitch my career tent in a new campground, and have noticed one common theme. These companies are looking for ways to break away from their competitors, especially as economic conditions improve and (as it were) the tide begins to rise for all boats.
And the news media, eager to report on any good news, has noted the uptick in mergers and acquisitions as one vehicle for growth. So you have Dell and Perot Systems, Adobe and Omniture, Oracle and whoever's in their sights now that's not named salesforce.com.
But the companies I've spoken with are focused on the more common route to growth: innovation arising from R&D investments. So I was interested to read an article in the latest issue of MIT Sloan Management Review that touted a tool (always a key word for MIT folks) for measuring (ditto) and thus managing R&D effectiveness. The author, Alexander Kandybin, has labeled this tool "ROI2"--Return on Innovation Investment. [Here is where I normally make some snarky remark about consultants (Kandybin is with Booz & Co.) and their penchant for renaming and repositioning ideas, but as I have nothing witty to offer, I'll leave that to the reader, as an exercise (my favorite phrase from my Calculus book, but I digress).]
Kandybin advocates building an "innovation effectiveness curve", in which "[we] plot annual spending on innovation projects against the financial returns from those projects, measured as a projected internal rate of return."
From here, managers can quickly see which (handful) of projects are the "stars", which are losers (at the tail of the distribution) and which live in the grand middle of good ideas that generate decent but not spectacular returns.
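The curve Kandybin describes can be sketched in a few lines. This is a minimal illustration with made-up project names, spend figures, and IRRs (none of them from the article): rank projects from highest to lowest projected IRR, accumulate spend along the x-axis, and the stars, the grand middle, and the tail fall out of the ordering. The 30% and 0% thresholds below are arbitrary cutoffs for the sake of the example.

```python
# Hypothetical projects: (name, annual spend in $M, projected IRR).
# All numbers are invented for illustration.
projects = [
    ("A", 2.0, 0.45), ("B", 5.0, 0.12), ("C", 1.5, 0.30),
    ("D", 4.0, 0.08), ("E", 3.0, 0.22), ("F", 2.5, -0.05),
]

# The effectiveness curve: projects sorted from highest to lowest
# projected IRR, with cumulative spend on the x-axis, IRR on the y-axis.
curve = sorted(projects, key=lambda p: p[2], reverse=True)
points = []
cumulative_spend = 0.0
for name, spend, irr in curve:
    cumulative_spend += spend
    points.append((name, cumulative_spend, irr))

# Arbitrary bucketing: high-IRR projects are "stars", negative-IRR
# projects are the tail; everything else is the grand middle.
stars = [name for name, _, irr in curve if irr >= 0.30]
tail = [name for name, _, irr in curve if irr < 0.0]
```

Plotting `points` gives the downward-sloping curve the article describes; the decision question is how far down that slope your budget should reach.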
So far, so good. And this is exactly the kind of tool I had been looking for as a fresh-faced graduate from MIT Sloan, eager to put my knowledge to work. So how does the idea stack up in the real world?
Imagine If You Will
First, recognize the "simplifying assumption" here. In order to calculate an internal rate of return (IRR), you have to estimate revenues attributable to the investment over the lifetime of the resulting innovation. So the first requirement is that you actually think about how much the innovation is going to make (top line) or save (bottom line) for the company. Some companies are more disciplined about doing this than others. Developing these revenue estimates was a part of developing a "Commercial Specification" (think forecast plus product requirements) when I was at Nortel. At other, typically smaller, companies we didn't take this step.
Second, this calculation or estimation of an IRR is a projection--you haven't made the money yet. So your IRR for the project is subject to whatever estimation errors you might make. If you're working in an industry with stable competitive and market forces, you might look to the measured returns of similar projects to better estimate an IRR for this project. The problem for me is that I've never worked in such a stable industry, so looking to the past is of limited use in predicting the future.
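To make the estimation-error point concrete, here is a small sketch with invented cash flows (a hypothetical $10M R&D spend followed by five years of revenue). IRR is just the discount rate at which NPV crosses zero, found here by bisection; inflating the revenue estimates by 20% moves the same project noticeably up the effectiveness curve.

```python
def npv(rate, cash_flows):
    """Net present value of cash_flows (year 0 first) at a discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Bisection search for the rate where NPV crosses zero.
    Assumes one sign change in the cash flows (spend first, revenue after)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical project: $10M spend in year 0, then five years of revenue.
base = [-10.0, 3.0, 4.0, 4.0, 3.0, 2.0]
optimistic = [base[0]] + [cf * 1.2 for cf in base[1:]]  # revenues +20%

# irr(base) comes out near 0.19; irr(optimistic) near 0.28 -- a modest
# forecasting error, and the project looks like a near-star.
```

The mechanics are trivial; the fragile part is the cash-flow list itself, which is exactly the estimate a volatile industry won't let you anchor in history.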
Another challenge with coming up with an estimated IRR is that it's subject to being "gamed". Once you know "big IRR is better than small IRR", there's a temptation to fit the estimate to the desired result. Nortel had a fairly rigorous R&D allocation process that tried to match "top down" views about how much aggregate investment should be made in each line of business with "bottom up" views about what revenues and/or cost savings could be "rolled up" from individual R&D projects. These "bottom up" views were typically discounted, given that the claimed returns, summed across all proposed projects, were on the order of 10X the returns the business unit itself expected.
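That reconciliation step can be sketched with toy numbers (all invented, not Nortel's): when project-level revenue claims sum to far more than the business unit's own forecast, scale every claim down proportionally so the roll-up matches the top-down number.

```python
# Hypothetical "bottom up" revenue claims from individual projects, in $M.
bottom_up = {"proj1": 50.0, "proj2": 30.0, "proj3": 20.0}

# Hypothetical "top down" forecast the business unit actually believes.
business_unit_forecast = 10.0

# Proportional haircut: here the claims total 100, so the scale is 0.1 --
# the "10X" discount described above.
scale = business_unit_forecast / sum(bottom_up.values())
discounted = {name: rev * scale for name, rev in bottom_up.items()}
```

A flat haircut is the crudest possible reconciliation; it punishes honest estimators and rewards the gamers equally, which is part of why the estimates get gamed in the first place.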
Making ROI2 "Fit for Purpose"
These considerations don't make the ROI2 tool bad. It just has to be adapted so as to be (in the words of my former boss, Joel Wachtler) "fit for purpose". Here are some examples.
- Track the revenues, even if it's in the aggregate vs. attributable to individual projects. When I ran Nortel's ISDN & Networking Group, I had the Finance people put together reporting on revenues arising from my group's hardware and software. I might not have known how an individual project paid out, but I knew what the total R&D spend delivered in terms of hard cash.
- Track the opportunities leveraged. Sometimes you have to deliver a feature or enhancement to close a deal. Follow up and make sure you got the business. At iPass, I had a slew of feature enhancements that had to be delivered to steal a marquee customer from a competitor. You can be sure I followed up to make sure they signed, and kept track of how much revenue they were generating.
- Look for a small bet that will prove the case. At one of our iPass partner meetings, someone suggested a utility that could live on a user's laptop, and tell them the location of the nearest iPass Wi-Fi hotspot. The idea had been tabled with the CFO, but was held up for want of a business justification. I suggested we commit a quarter's worth of R&D and licensing costs to set up and distribute the utility, and see what happened with Wi-Fi usage. The revenue benefits were obvious, and the "hotspot finder" became a huge hit.
- Look for "traceability" of R&D efforts. You can usually understand what part of the product, service, or system the R&D effort is touching. So you can ask, "if this part is improved, what's likely to happen?" Creating standard pricing and contract vehicles for one service at CGNet meant that we could turn proposals around in two hours, instead of two days.
- Verify that your R&D projects aren't driven by the need to keep a group of R&D folks busy. Marty Cagan, of the Silicon Valley Product Group, has written about this in his article "Feed the Beast". One way to get at this is to ask yourself: "if I were starting this company over, would I hire an R&D staff with this mix of skill sets?" Maybe you once had a need for a large hardware engineering team; is that still the case?
Maybe you're in a situation where the ROI2 tool would work. If so, that's fantastic. If not, you can still adapt the idea of evaluating projects according to their projected contribution to make sure that you're not filling the R&D pipe with "polishing the doorknob" types of projects. Some of these ideas may help you adapt such a rigorous evaluation method to the messy realities of your innovation circumstances.