Reducing the Risk of Innovation
Though we can’t describe it in words, or tell someone how to do it, we all know innovation is good. Why is it good? Look at the causal chain of actions that create a good economy, and you’ll find innovation is the first link.
When innovation happens, a new product is created that does something no other product has done before. It provides a new function, it has a new attribute that is pleasing to the eye, it makes a customer more money, or it simply makes a customer happy. It does not matter which itch it scratches; the important part is that the customer finds it valuable and is willing to pay hard currency for it. Innovation does something amazing: it results in a product that creates value, a product that's worth more than the sum of its parts. Start with things dug from the ground or picked from it – dirt (steel, aluminum, titanium), rocks (minerals, cement, ceramics), and sticks (wood, cotton, wool) – add new thinking, and out comes a product customers pay money for, money greater than the cost of the dirt, rocks, sticks, and new thinking. This, my friends, is value creation, and this is what makes national economies grow sustainably. Here's how it goes.
Customers value the new product highly, so much so that they buy boatloads of them. The company makes money, so much so that its stock price quadruples. With its newly stuffed war chest, the company invests with confidence, doing more innovation, selling more products, and making more money. An important magazine writes about the company's success, which causes more companies to innovate, sell, and invest. Before you know it, the economy is flooded with money, and we're off to the races in a sustainable way – a way based on creating value. I know this sounds too simplistic. We've listened too long to the economists and their theories – spur demand, markets are efficient, and the world economy thing. This crap is worse than it sounds. Things don't have to be so complicated. I wish economists weren't so able to confuse themselves. Innovate, sell, and invest – that's the ticket for me.
Innovation – straightforward? No. Easy? No. Innovation is scary as hell because it's risky as hell. The risk? A company tries to develop a highly innovative product, nothing comes out the innovation tailpipe, and the company has nothing to show for its investment. (I can never keep the finance stuff straight. Does zero return on a huge investment increase or decrease stock price?) It's the tricky risk thing that gets in the way of innovation. If innovation were risk-free, we'd all be doing it like voting in Chicago – early and often. But it's not. There is, however, a way to shift the risk/reward ratio in our favor.
After doing innovation wrong, learning, and doing it less wrong, I have found one thing that significantly and universally reduces the risk/reward ratio. What is it?
Know you’re working on the right problem.
Work on the right problem? Are you kidding? This is the magic advice? This is the best you’ve got? Yes.
If you think it's easy to know you're working on the right problem, you've never truly known you were working on the right problem, because this type of knowing is big medicine. Innovation is all about solving a special type of problem: problems caused by fundamental conflicts and contradictions, things that others don't know exist, don't know how to describe or define, let alone know how to eliminate. I'm talking about conflicts and contradictions in the physics sense – where something must be hot and cold at the same time, big while being small, black while white, hard one instant and soft the next. Solve one of those babies, and you've innovated yourself a blockbuster product.
In order to know you're working on the right problem (conflict or contradiction), the product must be analyzed in the physics sense. What's happening? Why, where, when, how? It's the rule (not the exception) that no one knows what's really going on; they only think they do. Since the physics are unknown, a hypothesis of the physics behind the conflict/contradiction must be conjured up and then tested, analytically or in the lab. All this is done to define the problem, not solve it. To conjure correctly, a radical and seemingly inefficient activity must be undertaken: engineers must sit at their desks and think about physics. This type of thinking is difficult enough on its own and almost impossible when project managers are screaming at them to get off their butts and fix the problem. As we know, thinking is not considered progress; only activity is.
After conjuring the hypothesis, it's tested to prove or disprove it. If disproven, it's back to the desk for more thinking. If proven, the conflict/contradiction behind the problem is defined, and you know you're working on the right problem. You have not solved it; you've only convinced yourself you're working on the right one. Now the problem can be solved.
Believe it or not, solving is the easy part. It’s easy because the physics of the problem are now known and have been verified in the lab. We engineers can solve physics problems once they’re defined because we know the rules. If we don’t know the physics rules off the top of our heads, our friends do. And for those tricky times, we can go to the internet and ask Google.
I know all this sounds strange. That's okay, it is. But it's also true. Give your engineers the tools, time, and training to identify the problems, conflicts, and contradictions, and innovation will follow. Remember the engineering paradox: sometimes slower is faster. And what about those tools for innovation? I'll save them for another time.
Looking for the next evolution of lean? Look back.
Many have achieved great success with lean – it's all over the web. Companies have done 5S, standard work, value stream mapping, and flow-pull-perfection. Waste in value streams has been cut from 95% to 80%, which is magical; productivity gains have been excellent; and costs have dropped dramatically. But the question on everyone's mind is: what's next? The blogs, articles, and papers are speculating on the question and proposing theories, all of which have merit. But I think we're asking the wrong question.
Instead of looking forward for the next evolution of lean, we should look back. We must take a fundamental, base-level look at our factories and ask: what did we miss? We must de-evolve our thinking about our factories and break down their DNA – like mapping the factory genome.
Though lean has achieved radical success, it has not achieved a fundamental reduction in factory complexity. Heresy? Let me explain. Lean helped us migrate from batch building to single-piece flow. With batch building, a group of parts is processed at machine A; then, when all are finished, the whole family moves to machine B. With single-piece flow, a part is processed at machine A and then moves, without her sisters, directly to machine B, resulting in big savings. But in both cases the fundamental part flow, a surrogate for factory complexity, remains unchanged – parts move from machine A to machine B. Lean did not change that. Lean has taken the bends out of our factory flow and squeezed machines together, but that's continuous improvement. We've got good signals, we've got cell-based metrics, and 15-minute pitches. Again, continuous improvement. But what about discontinuous improvement? How can we fundamentally reduce factory complexity?
Factories are what they are because of the parts flowing through them.
Factory flow and complexity are governed by the genetics of the parts. In that way, parts are the building blocks of the factory genome. From the machines and tools to the people, handling equipment, and incoming power – all are shaped by the parts' genetics. Heavy parts, heavy-duty cranes; complex parts, complex flows; big parts, big factories. When we want to make a fundamental change in a bacterium to make a vaccine, we change its genetics. When we want to make a fruit immune to a natural enemy or resistant to the cold of an unnatural habitat, we change its genetics. So it follows that if fundamental change in factory complexity is the objective, the factory genome should change.
Don’t try to simplify the factory directly, change the parts to let the factory simplify itself.
Discontinuous reduction of factory complexity is the result of something – changing the products that flow through the factory. Only design engineers can do that. Only design engineers can eliminate features on the design so machine B is not required. Only design engineers can redesign the product to eliminate the part altogether – no more need for machine A or B. In both cases, the design engineer did what lean could not.
Lean is a powerful tool, and I’m an advocate. But we missed an important part of the lean family. We drove right by. We had the chance to engage the design community in lean, but we did not. Let’s get in the car, drive back to the design community, and pick them up. We’ll tell them anything they want to hear, just as long as they get in the car. Then, as fast as we can, we’ll drive them to the lean pool party. Because as Darwin knew, diversity is powerful, powerful enough to mutate lean into a strain that can help us survive in the future.
Engineers and Change?
As an engineering leader I work with design engineers every day. I like working with them, it’s fun. It’s comfortable for me because I understand us. Yes, I am an engineer.
I know what we're good at, and I know what we're not good at. I've heard the jokes. Some funny, some not. But when engineers and non-engineers work well together, there's lots of money to be made. I figure it's time to explain how engineers tick so we can make more money. An engineer explaining engineers to non-engineers – a flawed premise? Maybe, but I'll roll the dice.
Everyone knows why design engineers are great to have around. Want a new product? Put some design engineers on it. Want to solve a tough technical problem? Put some design engineers on it. Want to create something from nothing? Design engineers. Everyone also knows we can be difficult to work with. (I know I can be.) How can we be high performing in some contexts and low performing in others? What causes the flip between modes? Understanding what’s behind this dichotomy is the key to understanding engineers. What’s behind this? In a word, “change”. And if you understand change from an engineer’s perspective, you understand engineers. If you remember just one sentence, here it is:
To engineers, change equals risk, and risk is bad.
Why do we think that way? Because that’s who we are; we’re walking risk reduction machines. And that’s good because in this time of doing more, doing it with less, and doing it faster, companies are taking more risk. Engineers make sure risk is always part of the risk-reward equation.
The best way to explain how engineers think about change and risk is to give examples. Here are three.
Changing a drawing for manufacturing
Several months after product launch, with things running well, there is a request to change an engineering print. Change the print? That print is my recipe. I know how it works and when it doesn’t. That recipe works. My job is to make sure it works, and someone wants to change it? I’m not sure it will work. Did I tell you it’s my job to make sure it works? I don’t have time to test it thoroughly. Remember, when I say it will work, you expect that it will. I’m not sure the change will work. I don’t want to take the risk. Change is risk.
Changing the specification
This is a big one. Three months into a new product development project, the performance specification is changed, moving it north into unknown territory. The customer will benefit from the increased performance, we understand this, but the change created risk. The knowledge we created over the last three months may not be relevant, and we may have to recreate it. We want to meet the new specification (we’re passionate about product and technology), but we don’t know if we can. You count on us to be sure that things will work, and we pride ourselves on our ability to do that for you. But with the recent specification change, we’re not sure we can get it done. That’s risk, that’s uncomfortable for us, and that’s the reason we respond as we do to specification changes. Change is risk.
Changing how we do product development
This is the big one. We have our ways of doing things and we like them. Our design processes are linear, rational, and make sense (to us). We know what we can deliver when we follow our processes; we know about how long it will take; and we know the product will work when we’re done. Low risk. Why do you want us to change how we do things? Why do you want to add risk to our processes? All we’re trying to do is deliver a great product for you. Change is risk.
Engineers have a natural bias toward risk reduction. I am not rationalizing or criticizing, just explaining. We don’t expect zero risk; we know it’s about risk optimization and not risk minimization. But it’s important to keep your eye on us to make sure our risk pendulum does not swing too far toward minimization. The great American philosopher Mae West said, “Too much of a good thing can be wonderful.” But that’s not the case here.
When it comes to engineers and risk reduction, too much of a good thing is not wonderful.
Fasteners Can Consume 20-50% of Assembly Labor
The data-driven people in our lives tell us that you can't improve what you can't measure. I believe that. And it's no different with product cost. Before improving product cost, before designing it out, you have to know where it is. However, it can be difficult to know what really creates cost. Not all parts and features are created equal; some create more cost than others, and it's often unclear which are the heavy hitters. Sometimes the heavy hitters don't look heavy, and often they're buried deep within the hidden factory.
Measure, measure, measure. That's what the black belts say. However, it's difficult to do well with product cost since our costing methods are hosed up and our measurement systems are limited. What do I mean? Consider fasteners (e.g., nuts, bolts, screws, and washers), the product's most basic life form. Because fasteners are not on the BOM, they're not part of product cost. Here's the party line: they're overhead, to be shared evenly across all the products in a socialist way. That's not a big deal, right? Wrong. Although fasteners don't cost much in ones and twos, they do add up. 300-500 pieces per unit times the number of units per year makes for a lot of unallocated and untracked cost.

However, a more significant issue with those little buggers is that they take a lot of time to attach to the product. For example, using standard time data from DFMA software, assembly of a 1/4″ nut with a bolt, Loctite, a lock washer, and cleanup takes 50 seconds. That's a lot of time. You should be asking yourself what that translates to in your product. To figure it out, multiply the number of nut/bolt/washer groupings by 50 seconds, then multiply the result by the number of units per year. Actually, never mind. You can't do the calculation, because you don't know how many nut/bolt/washer groupings are in your product. You could try to query your BOMs, but the information is likely not there. Remember, fasteners are overhead and not allocated to product. Have you ever tried to do a cost reduction project on overhead? It's impossible. Because overhead inflicts pain evenly on all, no one is responsible for reducing it.
With fasteners, it’s like death by a thousand cuts.
The time to attach them can be as much as 20-50% of assembly labor. That's right, up to 50%. That's like paying 20-50% of your folks to attach fasteners all day. That should make you sick. But it's actually worse than that. From Line Design 101, the number of assembly stations is proportional to demand times labor time. Since fasteners inflate labor time, they also inflate the number of assembly stations, which, in turn, inflates the factory floor space needed to meet demand. Would you rather design out fasteners or add 15% to your floor space? I know you can get good deals on factory floor space due to the recession, but I'd still rather design out fasteners.
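If you could count the groupings, the arithmetic would look like this. A back-of-the-envelope sketch – the 50-second figure is the DFMA standard time from above; every other number is invented for illustration, so substitute your own:

```python
# Rough sketch of the fastener-labor arithmetic described above.
# All quantities except SECONDS_PER_GROUPING are hypothetical.

SECONDS_PER_GROUPING = 50        # DFMA standard time: nut, bolt, Loctite, washer, cleanup
groupings_per_unit = 120         # nut/bolt/washer sets per unit (if you can count them)
assembly_seconds_per_unit = 4 * 3600   # total assembly labor per unit: 4 hours
units_per_year = 20_000
station_hours_per_year = 2_000   # available hours per assembly station per year

fastener_seconds = SECONDS_PER_GROUPING * groupings_per_unit
fastener_share = fastener_seconds / assembly_seconds_per_unit

# Line Design 101: stations scale with demand times labor time
total_labor_hours = assembly_seconds_per_unit * units_per_year / 3600
stations = total_labor_hours / station_hours_per_year
stations_running_nuts = stations * fastener_share

print(f"Fastener share of assembly labor: {fastener_share:.0%}")          # ~42%
print(f"Assembly stations needed: {stations:.0f}")                         # ~40
print(f"Stations effectively running nuts: {stations_running_nuts:.0f}")   # ~17
```

With these made-up numbers, fasteners eat about 42% of assembly labor and the equivalent of 17 of 40 stations – squarely in the 20-50% range.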
Even with the amount of assembly labor consumed by fasteners, our thinking and computer systems are blind to them and the associated follow-on costs. And because of our vision problems, the design community cannot be held accountable for designing out those costs. We've given them the opportunity to play dumb and say things like, "Those fastener things are free. I'm not going to spend time worrying about that. It's not part of the product cost." Clearly not an enlightened statement, but it's difficult to overcome without cost allocation data for the fasteners.
The work-around for our ailing thinking and computer-based cost tracking systems is simple: get the design engineers out to the production floor to build the product. Have them experience firsthand how much waste is in the product. They'll come back with a deep-in-the-gut understanding of how things really are. Then, have them use DFMA software to score the existing design, part-by-part, feature-by-feature. I guarantee everyone will know where the cost is after that. And once they know where the cost is, it will be easy for them to design it out.
I have data to support my assertion that fasteners can make up 20-50% of labor time, but don't take my word for it. Go out to the factory floor, shut your eyes, and listen. You'll likely hear the never-ending song of the nut runners. With each chirp, another nut is fastened to its bolt and washer, and another small bit of labor and factory floor space is consumed by the lowly fastener.
DFA and Lean – A Most Powerful One-Two Punch
Lean is all about parts. Don’t think so? What do your manufacturing processes make? Parts. What do your suppliers ship you? Parts. What do you put into inventory? Parts. What do your shelves hold? Parts. What is your supply chain all about? Parts.
Still not convinced parts are the key? Take a look at the seven wastes and add “of parts” to the end of each one. Here is what it looks like:
- Waste of overproduction (of parts)
- Waste of time on hand – waiting (for parts)
- Waste in transportation (of parts)
- Waste of processing itself (of parts)
- Waste of stock on hand – inventory (of parts)
- Waste of movement (from parts)
- Waste of making defective products (made of parts)
And look at Suzaki's cartoons. What do you see? Parts.
Take out the parts and the waste is not reduced, it's eliminated. Let's do a thought experiment and pretend your product has 50% fewer parts. (I know it's a stretch.) What would your factory look like? How about your supply chain? There would be fewer parts to ship, fewer to receive, fewer to move, fewer to store, fewer to handle, fewer opportunities to wait for late parts, and fewer opportunities for incorrect assembly. Loosen your thinking a bit more, and the benefits broaden: fewer suppliers, fewer supplier qualifications, fewer late payments, fewer supplier quality issues, and fewer expensive black belt projects. Most important, however, may be the reduction in transactions: work-in-process tracking, labor reporting, material cost tracking, inventory control and valuation, BOMs, routings, backflushing, work orders, and engineering changes.
However, there is a big problem with the thought experiment – there is no one to design out the parts. Since company leadership does not thrust greatness on the design community, design engineers do not have to participate in lean. No one makes them do DFA-driven part count reduction to complement lean. Don't think you need the design community? Ask your best manufacturing engineer to write an engineering change that eliminates parts, and see where it goes – nowhere. No design engineer, no design change. No design change, no part elimination.
It’s staggering to think of the savings that would be achieved with the powerful pairing of DFA and lean. It would go like this: The design community would create a low waste design on which the lean community would squeeze out the remaining waste. It’s like the thought experiment; a new product with 50% fewer parts is given to the lean folks, and they lean out the low waste value stream from there. DFA and lean make such a powerful one-two punch because they hit both sides of the waste equation.
DFA eliminates parts, and lean reduces waste from the ones that remain.
There are no technical reasons that prevent DFA and lean from being done together, but there are real failure modes that get in the way. The failure modes are emotional, organizational, and cultural in nature, and are all about people. For example, shared responsibility for design and manufacturing typically resides in the organizational stratosphere – above the VP or Senior VP levels. And because of the failure modes’ nature (organizational, cultural), the countermeasures are largely company-specific.
What’s in the way of your company making the DFA/lean thought experiment a reality?
DFA Saves More than Six Sigma and Lean
I can't believe everyone isn't doing Design for Assembly (DFA), especially in these tough economic times. It's almost like CEOs really don't want to grow stock price. DFA, where the product design is changed to reduce the cost of putting things together, routinely achieves savings of 20-50% in material cost, and the same for labor cost. And the beauty of the material savings is that they fall right to the bottom line. For a product that costs $1000 with 60% material cost ($600) and 10% profit margin ($100), a 10% reduction in material cost increases bottom-line contribution by 60% (from $100 to $160). That sounds pretty good to me. But, remember, DFA can reduce material cost by 50%. Do that math and, when you get up off the floor, read on.
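If you'd rather not do that math in your head, here it is as a tiny sketch, using the same hypothetical $1000 product from above:

```python
price = 1000.0
material = 0.60 * price   # $600 of material cost
profit = 0.10 * price     # $100 of profit

# Material savings fall straight to the bottom line
for reduction in (0.10, 0.50):
    new_profit = profit + material * reduction
    print(f"{reduction:.0%} material reduction: profit ${profit:.0f} -> ${new_profit:.0f} "
          f"(+{(new_profit - profit) / profit:.0%})")
```

The 10% case gives the 60% profit increase quoted above; the 50% case takes profit from $100 to $400 – a 300% increase. That's the floor you'll be getting up off of.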
Unfortunately for DFA, the savings are a problem – they're too big to be believed. That's right, I said too big. Here's how it goes. An engineer (usually an older one who doesn't mind getting fired, or a young one who doesn't know any better) brings up DFA in a meeting and says something like, "There's this crazy guy on the web writing about DFA who says we can design out 20-50% of our material cost. That's just what we need." A pained silence floods the room. One of the leaders says something like, "Listen, kid, the only part you got right is calling that guy crazy. We're the world leaders in our field. Don't you think we would have done that already if it was possible? We struggle to take out 2-3% material cost per year. Don't talk about 20-50%, because it's not possible." DFA is down for the count.
Also unfortunate is the name – DFA. You've got to admit DFA doesn't roll off the tongue like six sigma, which also happens to sound like sex sigma, where DFA does not. I think we should follow the lean sigma trend and glom some letters onto DFA so it can ride the coattails of the better-known methodologies. Here are some letters that could help:
Lean DFA; DFA Lean; Six Sigma DFA; Six DFA Sigma (this one doesn’t work for me); Lean DFA Sigma
Its pedigree is also a problem – it’s not from Toyota, so it can’t be worth a damn. Maybe we should make up a story that Deming brought it to Japan because no one in the west would listen to him, and it’s the real secret behind Toyota’s success. Or, we can call it Toyota DFA. That may work.
Though there is some truth to the previous paragraphs, the main reason no one is doing DFA is simple:
No one is asking the design community to do DFA.
Here is the rationalization: the design community is busy and behind schedule (late product launches). If we bother them with DFA, they may rebel, and the product will never launch. If we leave them alone and cross our fingers, maybe things will be all right. That is a decision made in fear, which, by definition, is a mistake.
The design community needs greatness thrust upon them. It’s the only way.
Just as the manufacturing community was given no choice about doing six sigma and lean, so should the design community be given no choice about doing DFA.
No way around it, the first DFA effort is a leap of faith. The only way to get it off the ground is for a leader in the organization to stand up and say, "I want to do DFA," and then rally the troops to make it happen.
I urge you to think about DFA in the same light as six sigma or lean: If your company had a lean or six sigma project that would save you 20-50% on your product cost, would you do it? I think so.
Who in your organization is going to stand up and make it happen?
Innovation, Technical Risk, and Schedule Risk
There is a healthy tension between level of improvement, or level of innovation, and time to market. Marketing wants radical improvement, infinitely short project schedules, and no change to the product. Engineers want to sign up for the minimum level of improvement, want project schedules sufficiently long to study everything to death, and want to change everything about the new product. It's healthy because there is balance – both are pulling equally hard in opposite directions, and things end up somewhere in the middle. It's not a stress-free environment, but it's not too bad. Sometimes, though, the tension is unhealthy.
There are two flavors of unhealthy tension. First is when engineering has too much pull; they (we) sandbag on product performance and project timelines and change the design willy-nilly simply because they can (and it’s fun). The results are long project timelines, highly innovative designs that don’t work well, a lack of product robustness, and a boatload of new parts and assemblies. (Product complexity.) Second is when Marketing has too much pull; they ask for radical improvement in product functionality with project timelines too short for the level of innovation, and tightly constrain product changes such that solutions are not within the constraints. The results are long project timelines and un-innovative designs that don’t meet product specifications. (The solutions are outside the constraints.) Both sides are at fault in both scenarios. There are no clean hands.
What are the fundamentals behind all this gamesmanship? For engineering it's technical risk; for marketing it's schedule risk. Engineering minimizes what it signs up for to reduce technical risk and petitions for long project timelines to reduce it further. Marketing minimizes product changes (constraints) to reduce schedule risk and petitions for short project timelines to reduce it further. (Product development teams work harder with short schedules.) Something's got to change.
Design for Six Sigma and Six Sigma Are Not Even Cousins
There is no question that Six Sigma helps companies make money. So much so that everyone in the manufacturing community knows the five hallowed letters: DMAIC (Define, Measure, Analyze, Improve, Control). It's straightforward and fully wrung out. But that's not the case for its wicked stepsister, Design for Six Sigma (DFSS). She's fundamentally different and more complicated. To start, it's an alphabet soup out there. Here are some of the letters: DMADV, DMADOV, IDOV, and DMEDI, and there are likely more. Does everyone know these letters and what they stand for? Not me. But here is the fundamental difference: with DMAIC the thing to be improved already exists, and with DFSS the thing to be created does not. In essence, there is no formalized problem to solve. So what, you say?
With DMAIC it's all about reducing variation relative to the specification; with DFSS there is no specification. In fact, there is no product, nor a process, on which we can measure variation. First the product itself must be created and its functional performance defined over a range of parameters. Only then can manufacturing variation be measured relative to the range of functional parameters (DMAIC). But I'm getting ahead of myself.
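To see what "variation relative to the specification" means in concrete terms, here is a minimal process-capability sketch. The spec limits and measurements are made up for illustration; the Cpk formula is the standard one DMAIC practitioners use:

```python
import statistics

# Hypothetical two-sided spec for a machined diameter, in mm (made-up numbers)
LSL, USL = 9.95, 10.05

# Measured samples from the existing process
samples = [10.01, 9.99, 10.02, 10.00, 9.98, 10.01, 10.03, 9.99]

mu = statistics.mean(samples)
sigma = statistics.stdev(samples)

# Cpk: distance from the mean to the nearest spec limit, in units of 3 sigma
cpk = min(USL - mu, mu - LSL) / (3 * sigma)
print(f"mean={mu:.4f} mm, sigma={sigma:.4f} mm, Cpk={cpk:.2f}")
```

None of this can even be computed until the product exists and its spec limits are defined – which is exactly the work DFSS has to do first.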
Before creating the thing that does not exist and making sure it meets the functional specification, some mind reading of customer needs is required – an even less well-defined activity. So, there is a round of reading customers' minds, followed by a round of creating something that does not exist to satisfy the customer needs defined in the mind-reading sessions. Oh yeah, then the tolerances must be defined so the product always functions the way it's supposed to. All this before we turn the DMAIC crank.
My point with all this is to help set expectations when dealing with product design/DFSS. It is wrong to expect the predictability and standardization of DMAIC when doing product design/DFSS. It's different. Product design/DFSS is not the same turn-the-crank kind of operation. That is not to say there is zero predictability and standard work, or that predictability is not something to strive for. It's just different. With product design the problems are unknown at the start, and sometimes even the fundamental physics are unknown. Please keep this in mind when your product development projects are late relative to hyper-aggressive, non-work-content-based schedules, or when new products don't meet arbitrary cost targets.
Improving Product Robustness 101
Improving product robustness is straightforward and difficult. Here’s how to do it.
Identify specific failure modes, prioritize them, and go after the biggest ones first. Failure modes can be identified through multiple sources. Warranty data is sometimes coded by failure mode (more precisely, symptom type), so start there. The number one failure mode in this type of data is typically "no problem found", so be ready for it. Analysis of the actual products that come back is another good way: returned product is routed to the appropriate engineer, who analyzes it and enters the failure mode into a database.

A formal design FMEA generates a list of failure modes prioritized by risk priority number (RPN), where larger is more important. To do this, engineers are hauled into a room and a facilitator helps them come up with potential failure modes. One caution – the process can generate many failure modes, more than you can fix, so make the top five or ten go away and don't argue about the bottom fifty. It makes no sense to even talk about number eleven if you haven't fixed the top ten.

But the best way I have found to identify failure modes (problems) that are meaningful to the customer is to ask the technical services group for their top five things to fix. They will give you the right answer because they interact daily with customers who have broken product. They won't expect you to listen to them (you never listened before), so surprise them by fixing one or two things on their list. They will be grateful you listened (they'll likely want to buy you coffee for the rest of your career), and your customers will notice.
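For the FMEA route, the prioritization arithmetic is worth seeing once. A minimal sketch – the failure modes and 1-10 scores below are invented for illustration; RPN is the standard severity times occurrence times detection product:

```python
# Hypothetical design-FMEA entries: (failure mode, severity, occurrence, detection),
# each scored 1-10 by the engineers in the room
failure_modes = [
    ("seal leaks under thermal cycling", 8, 6, 5),
    ("connector fretting corrosion",     7, 4, 7),
    ("fan bearing wear",                 5, 7, 3),
    ("display flex-cable fatigue",       9, 3, 6),
]

# RPN = severity * occurrence * detection; bigger means fix it first
ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)

for name, s, o, d in ranked:
    print(f"RPN {s * o * d:>3}  {name}")
```

Fix from the top of that list down, and stop arguing about the bottom.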
Once failure modes are identified, define the physics of failure – why the product breaks. This is tough work and requires focused thought and analysis. If, when you break the product, it “looks like” the ones coming back from the field, you have defined the physics of failure. This is the same thing as replicating the problem in the lab. Once that’s defined, create an automated test rig or experimental setup that breaks the product in a way that captures the physics of failure. I call this test rig a robustness surrogate because it stands in for the actual failure mode seen in the field. The robustness surrogate should break the product as fast as possible while retaining the physics of failure so you can break it and fix it many times before product launch. The robustness surrogate should be designed to break the product within minutes, not hours or days – the faster the better.
To know if product robustness is improved, the baseline (or existing) design is broken on the robustness surrogate. The new design must survive longer on the robustness surrogate than the baseline design. The result is A/B data (baseline design / new design) that is presented at the design review using a simple bar graph format I call big-bar-little-bar. Keep improving the robustness of the new design even if it outperforms the baseline design by a factor of ten – that's not good enough for your customers.
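Big-bar-little-bar is exactly as simple as it sounds. Here's a sketch with invented minutes-to-failure numbers (assuming matplotlib is available):

```python
import matplotlib.pyplot as plt

# Hypothetical minutes-to-failure on the robustness surrogate (made-up numbers)
baseline_minutes = 42      # existing design
new_design_minutes = 310   # candidate design

# One chart, two bars, no statistics required
fig, ax = plt.subplots()
ax.bar(["Baseline", "New design"], [baseline_minutes, new_design_minutes])
ax.set_ylabel("Minutes to failure on robustness surrogate")
ax.set_title("A/B robustness comparison (bigger is better)")
plt.savefig("big_bar_little_bar.png")
```

No MTBF, no confidence intervals – just a big bar next to a little bar at the design review.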
Don’t stop improving robustness until you run out of time, and don’t stop if you meet the arbitrary MTBF specification. Customers like improved robustness, and in this case too much of a good thing is wonderful.
Using this method, I reduced warranty cost per unit by 75% over a five-year period. It worked.
Improve Product Robustness at the Expense of Predicting It
In a previous post I defined the term brand-damaging threshold and said I’d talk about how to improve product robustness. So, here goes.
Every company is at a different stage in their formalized product robustness efforts, so it’s challenging to talk meaningfully to everyone. But there are two especially meaningful principles that have served me well through the years.
I had the privilege of working with Don Clausing – of Total Quality Design, The House of Quality, Enhanced QFD, and Robust Quality fame. I vividly remember the conversation where Don shared one of his secrets. As we watched a robustness test run, Don, in his terse way, barked out a guiding principle of improving product robustness. He said:
“Improve robustness at the expense of predicting it.”
I asked Don what the hell he meant (he liked to make his students work for it), and after some prodding, he went on to explain why it's so important. He said people spend far too much time running tests to predict robustness and then spend even more time calculating mean time between failures (MTBF). If that's not enough, they then spend time arguing about the MTBFs and the confidence intervals. He said companies should dedicate all their time and energy to improving robustness. "That's what matters to the customer," he said. And then he continued with something like: "Predicting robustness is worse than a simple waste of time." (He wasn't that polite.) But I still didn't get it. What's the big deal about predicting robustness?
Lack of product robustness can damage your brand
There are many definitions of product robustness and just as many formally trained specialists willing to argue about them. I get confused by all that complexity, I don't like to argue, and I am not a specialist; I am a generalist. I like simplicity, so I use operational definitions every chance I get. Here's one for product robustness:
A customer walks up to your product, turns it on, and it works without breaking or getting in its own way.
Bad product robustness is bad for your brand. Very bad. Customers do not like when they pay money for a product and it doesn’t work, especially when they rely on those products to make money for themselves. And they remember the experience in a visceral way.
You can't fix bad product robustness with great marketing; you can't fix it with spin selling; you can't tell customers you fixed it when you didn't (since they use your product, they know the truth); and you can't hide it, because customers talk (so do competitors). There is no quick fix – it takes tools, time, training, and new thinking to improve product robustness. And when you do manage to fix it, customers won't believe you until they see it for themselves. They don't want to get burned again.
No product is infinitely robust, nor should it be. It doesn't make financial sense. The product would be infinitely expensive and would take an infinite amount of time to develop. But how much robustness is enough? An easier, and possibly more important, question to answer is: how much is too little? Or, stated another way, what is the minimum level of product robustness?
The specialists won’t agree with my assertion that there is a minimum threshold for product robustness, but I don’t care. I think there is one. I call this minimum value the brand-damaging threshold. Here’s an operational definition of product robustness that’s below the brand-damaging threshold:
Customers don’t buy your product because they know it breaks or gets in its own way and they go out of their way to tell others about it.
It is difficult to know when customers don’t buy, never mind know why they don’t. But there are some tell-tale signs that product robustness is below the brand-damaging threshold. Here are a few.
The CEO takes enough direct calls about products that don't work to feel obligated to send you a thoughtfully crafted, four-word email saying something like "Fix that @#&% thing!" Customers have to be really pissed off to call the CEO directly, so the situation is bad. It's also bad for a reason that's closer to home – the CEO sent the email to you.
You get a little sick to your stomach when sales increase. You know you should be happy, but you’re not. Deep down you know you’ll see many of those products again because they’ll be sent back by angry customers, in pieces.
The volume of returns is so significant you create a refurbishment department. Or you create a new group to scavenge the reusable stuff off the piles of returned product. Not good signs.
Your product’s lack of robustness is the headline message in your customers’ marketing literature.
Now that the brand-damaging threshold is defined, the next logical topic is how to improve product robustness so it’s above the threshold. But that’s for another post.