Today’s markets shift faster than ever, yet companies structure themselves in ways that hinder their ability to respond to those shifts. Some of this has roots in what works for small companies. It gets magnified by a black-and-white reading of classic MBA materials.

When a company is just starting out, it is focused and targeted. Similarly, hiring is often targeted. As the company grows, this view of “hiring the expert” continues, magnified by the classic alignment of resources taught in business schools. While a larger company might hire a new college graduate, the new hire is placed in a product-specific team where he is expected to become an expert in one area. This is the alignment of resources: the product area must control its engineering team lest the product line general manager lack the resources to meet his goals. I agree with this as far as it goes. Where I disagree is with the black-and-white nature of this structure at most companies. Life is grey. What is needed is an engineering team structure that combines focus with flexibility. What we want to achieve is a very diverse set of objectives:

  • Product area expertise
  • Resource alignment with product line control
  • Continuity of effort and knowledge
  • Flexibility
  • Diverse skill set
  • Dynamic resourcing
  • Dissemination of corporate knowledge
  • Minimization of staffing & cultural disruptions

Classic engineering structures solve the first three items. The product line builds a team which becomes expert in the product line’s engineering area. The product line owns these resources, which provides control: they can be assigned at the will of the product line manager (PLM). Additionally, assuming good employee retention, product line knowledge is retained, and successive products build on earlier engineering work. This is the classic engineering structure. It is simple and easy. In particular, it is easy on the CEO, who gets to defer resourcing of projects to the product line. This structure even solves some of the need for flexibility. The PLM is able to adjust his resources as he sees fit. This avoids the excuses about the inability to control resources that would exist in a company with one large, central engineering organization, and hence avoids a major headache for the CEO.

Now look at the last five items in the list. While there is flexibility in assigning the product line’s resources, a classic structure has little or no flexibility to move staff among the various engineering groups. This means a focused but narrow set of available engineering skills. Headcount varies only through hiring or firing of engineering staff. This leads to disruptions and a lack of continuity. During periods when the product line lacks ideas for new products (yes, it happens), engineering staff is reduced. When times change, new people are brought in, with the inevitable necessity of correcting poor hiring decisions. This is disruptive, and the engineering team is distracted from its primary job of creating new products. Finally, the product line becomes a silo, isolated from other product lines. Mistakes in one product line get repeated in another. Little knowledge gets transferred between the different engineering groups. Corporate engineering conferences can help in this last area, but it’s a fight: knowledge transfer has to be forced rather than being a natural outcome of the engineering structure. This is one area where a central organization is superior.

In the past, some companies have tried to solve these issues by moving all engineering resources into one large, central organization. Doing this fixes many of the later issues but exacerbates the first four. In particular, the PLMs feel they lack the ability to control their own future. They are asked to perform and yet have no control over the engineering resources that are key to their success.

The correct solution is to remember that life is grey, i.e., what is needed is a mix. Most resources should be aligned with the product lines. Each product line should have adequate staff to execute its normal engineering workload. In addition, there needs to be a central engineering team, best viewed as a “hit team” for the CEO. Its resources are assigned to various product lines based on where the CEO wants to place overall corporate emphasis. Properly executed, this augmentation of the product line resources solves several issues. Resource flexibility is enhanced. Since the central team works in many areas, it develops a diverse skill set. Resourcing is dynamic, with both product line and overall corporate flexibility. Knowledge gets disseminated as a natural result of the central team working with the various product line engineering groups. Finally, staffing and cultural disruptions are minimized. When one area is entering a slack time, central resources are pulled and assigned elsewhere. Churning of engineers is reduced.

The structure I have just described works well. Well, it is more accurate to say it can work well if properly executed. Like many things, what appears simple is much more complicated in reality. Staffing the central group is different from staffing a product line team. The prime requirement is an engineer who understands engineering. I don’t mean the “A” student who memorized his way through his classes, but rather the student who has a deep feel for engineering. This requires a different, and usually deeper, interview process. These engineers will be required to work in various areas rather than being specialized in one. They will have to depend on their deep understanding to guide them when working in unfamiliar areas. Furthermore, personality screening must be done to make sure the prospective engineer will adapt well to being a central resource; many engineers prefer the ownership that comes from being part of a product line. The central group is one where most of the hiring should be new college graduates. That way the engineers aren’t specialized by years of working in one narrow area. A few experienced engineers will be needed to guide these new college grads, and those engineers must have good leadership skills.

This central group has to be built, and that brings up another problem: the group goes through growth stages, and management must be able to adjust to those stages and properly nurture them. Initially the group will be green. That means it will require close supervision and what amounts to micromanagement. Also, initial tasks should involve simple extensions of the work done by the main product line engineering groups. An example is a migration of a proven design to a next-generation technology, or a simple change to generate a differentiated product. This also helps alleviate the product line groups’ fear that the central group is out to replace them. As unfounded as that fear is, it will be there. Pointing out that the central group is taking the boring jobs so the expert product line groups can handle the cool new challenging designs will make things go more smoothly.

As time goes on, the nature of the central group will change. The central group will get exposed to designs from each of the different product lines. That means the central group will be the one group with exposure to the full set of corporate engineering knowledge. Eventually it will become an agent for cross-pollination between the product line groups, bringing engineering knowledge and techniques from one group to another. The central group will also mature out of doing simple “follow-on” projects to the point where it takes on more challenging tasks. Eventually it will become an elite group taking on some of the most difficult projects the company has. At this point product line fears will reemerge and must be dealt with by management. There will be a fear that all of the good work will go to the central group. Management must reaffirm its faith in the product line groups and point out that the central group is not sized to take over all engineering and can only augment the product line groups. Additionally, management will have to make the difficult shift from micromanaging a green group to being the supportive enabler of a mature group.

There are other issues that must be dealt with if a company is to be successful with this structure. In particular, the CEO must be willing to be an active participant in the direction of the company. Resourcing of the central group will involve CEO direction. I have been surprised by how many CEOs want to abdicate this responsibility. It is simpler for them to just dump responsibility for the direction of the company on the PLMs. Worse, as the central group gets more experienced, it will, if properly nurtured, develop into a resource that gets fought over by the product lines. Indeed, there will be ongoing attempts at “land grabs,” i.e., the PLMs will explain to the CEO why they should control the central group. This will require a strong CEO who is willing to drive overall corporate direction and resist the pressure to break up the central group. It comes down to a CEO willing to do what is right for the shareholders rather than what makes his life easier.

What the central group is not is an extension of the CTO office solely devoted to new developments. That said, it can easily report into the CTO office. Furthermore, the flexibility of the group and its diverse skill set make it the perfect tool for entry into new product areas. Consider the way most companies handle entry into a new product area. First, the company looks to hire a team or do an acquisition. An acquisition may be the right answer – sometimes. Often it represents a huge waste of shareholder money: a failed startup is acquired at a high price, and the internal engineers are sent the message that management lacks faith in them to enter the new area. If a new group is built from scratch, the company looks to hire experts. What it usually gets are the “B”-level players; the “A” players are locked into their present companies with golden handcuffs. The big problem is one of time. Recruiting and building the new group takes time. If done too quickly, the result is a poor-quality group and a failed entry into the new area. A better approach is to get one experienced person in the new product area and augment him with resources from the central team. The central team can start even before the experienced engineer is found and hired. The result is rapid entry using a quality team which can extend what has been previously done by other companies. This contrasts with the acquisition and from-scratch approaches, which often lead to a “me too” and mediocre product.

In summary, it takes a combined structure of engineering resources to maximize execution together with flexibility. This can be accomplished, but it requires the active participation of the CEO. Additionally, the CEO must be willing to resist the “land grab” pressures that will develop. Properly done, this structure enables a company to move rapidly into new areas, cross-pollinate ideas within the company, reduce engineer churn and shift product emphasis rapidly.

First and foremost, Apple sells a polished user experience. Apple sweats the details. From the moment you walk into the store, the experience is polished and first-rate. Unboxing your purchase continues the experience. Even Apple’s service group, AppleCare, is different: you get lots of attention from people who know what they are doing. Apple hardware has a lot of refinement. The OS feel is consistent, and people regularly describe Apple products as intuitive and easy to use.

I have written about convergence and transparency. These two trends play right into Apple’s strengths. Apple is selling more and more laptops because people have purchased iPhones. People who have purchased iPads are now buying iPhones. The release of OSX Lion moves the laptop closer to iOS. The iPhone and the iPad use the same OS. This means transparency of use. But, for the first time, I see Apple moving backwards. Their new policy requires that Apple receive 30% of any in-app purchase. I can see how Apple reached this point. Games would be offered for free in the Apple App Store. Once you started playing the game, you found out you had to do an in-app purchase to go beyond level 3. Apple saw this as a direct end run around their app store policies in order to avoid paying Apple their cut. Admittedly, at 30% that cut is big and hence companies, especially small ones, are highly motivated to avoid this form of app store “tax.” None of this is a big problem as long as we are talking about games. Things are different when it comes to magazines and books.

So far the best example of the move towards transparency has been the Kindle ecosystem. There are Kindle apps for just about every device. There are apps for Android, iPhone, iPad, Mac, and Windows. If you buy a book through any one app it is available on all of the others. Bookmarks are shared. You can read on your tablet, pick up on your phone and finish up on your laptop. In every case, when you move to a new device, the app knows where you left off on the old one. This is transparency of use in action. Now Apple is working to hinder that transparency.

Reading books is still a transparent experience. However, buying them now involves exiting the Kindle app and using a web browser to go to Amazon.com. On iOS you can’t even click a button in the Kindle app and have it open Safari with the appropriate URL (the Mac Kindle app can do the equivalent). What should really happen is that the Kindle store should be built into the Kindle app. I suspect it eventually will be on Android. It will never be on iOS devices: Apple’s 30% cut would turn a money maker into a loss leader. Not only is 30% too high, I see no reason Apple should get anything. The books aren’t being bought through Apple’s online store. Besides, it is anticompetitive: it gives Apple’s own iBooks a competitive pricing advantage. The problem is, iBooks isn’t as universal as Kindle. This small chink in Apple’s image is becoming a growing crack. Online forums have end users griping about it. This is a chance for Google to press Apple and change the image of Android vs. iOS.

Until now, Android has been an interesting phone OS beloved by techies for its openness and many features. Most consumers have viewed, and in fact still view, Apple’s iOS as the more polished and bug-free operating system for phones and tablets. Apple’s greed could change that. Android gets more polished day by day. If in-app purchases become the norm on Android and the exception on iOS, then consumers will see Android as the easier and more transparent operating system. Imagine the difference if Amazon gives its Kindle apps smooth integration with the Kindle store everywhere except on Apple devices. As more people buy and read ebooks, this will push them towards Android instead of iOS. All you have to do is read this to see how Apple may be inadvertently causing apps to be less friendly. Android versions of the apps won’t be so limited.

Right now Apple’s new policy has done little other than make Apple richer and tick off some app writers. However, as Android keeps getting stronger, this policy might come to threaten Apple when consumers begin to find buying and reading ebooks and ezines easier and more transparent on Android than iOS.

Moogle Update

Posted: August 24, 2011 in Google, Motorola

Despite all of the niceties said by Samsung and others, we now see the truth. Samsung doesn’t like Google becoming a competitor. The result is that Samsung is now supporting a Korean effort to develop its own phone OS. Check it out here. My advice to Google still stands: they should sell off Motorola Mobility’s cell phone business.

HP paid $1.2B for Palm. Now they are dumping that and more. I have been saying that the only ecosystems that will survive are Apple, Google (Android) and Microsoft. The carnage has started. WebOS was a good OS. That doesn’t matter. It was too late, too poorly marketed and never got traction. Now it is essentially dead. RIM will follow although not in the near future.

More shocking is the announcement that HP may exit the PC market. HP leads the PC market in market share. How can they possibly want to exit that market? To understand why HP could even be considering this, you need to look a little deeper. The laptop market is very competitive. That translates to low margins for everyone except Apple. Only Apple has a customer base willing to consistently pay a premium for its laptops. Additionally, HP’s market share has been falling. But… here is the main reason: the phone is becoming the dominant computing device. The laptop is rapidly becoming secondary. Desktops are already secondary devices. The only way to shore up laptops in a way that would maintain margins was to develop an ecosystem with laptops as part of it. WebOS was a poor attempt at that. With the failure of WebOS, HP laptops will have to compete as just another part of the Microsoft ecosystem. That’s OK now, but it is a position that gets worse each day. If you count tablets as part of mobile computing, then Apple has already surpassed HP in market share. What HP is afraid of is being trapped in a market that is losing relevance, decreasing in size and so commoditized that there is little differentiation. All of that will lead to little or no profit.

The big takeaway from this is that it is not an isolated event. It is part of the convergence trend I have been discussing. There will be more titanic changes to come, and they will involve more than RIM.

By now most readers will be aware that Google is buying Motorola Mobility. I started to write about this when I first heard the news but I wanted to think about it and explore the implications and potential reasons. Time is up. Here are my thoughts.

The most straightforward reason is patent defense. When Google lost out to Microsoft and Apple in the bidding for the Nortel patent portfolio, it was left in a very bad position: Android violates several of the Nortel patents. Google launched an offensive claiming Apple and Microsoft were using patents, as opposed to compelling solutions, as a way to attack Google. We must remember that Google also bid for these patents and, had it won, would probably have used them against Microsoft and Apple. Furthermore, an offer to join Microsoft and Apple in acquiring the patents was rebuffed by Google. If the purchase of Motorola Mobility is indeed a defensive play, then this is nothing more than another round of that old patent game: “I’ll cross-license mine if you will cross-license yours.” Considering the large amount of cash Google is sitting on, this might be a very sensible move.

Could there be more to the acquisition than patents? Google has made cell phones in the past, when it was jump-starting Android. But should it be a cell phone producer? In the PC space, Apple has been a small, closed ecosystem compared to the loose and very diversified Microsoft ecosystem. The result was a larger, cheaper and more diversified hardware and software ecosystem for Windows (Microsoft) compared to OSX (Apple). Recall that, at one time (the Apple II era), Apple dominated the desktop space. The diversity of the Microsoft-based environment resulted in Apple becoming a niche player. Today, despite Apple’s early lead, there is a strong possibility that Android will be the Windows of the smartphone and tablet space. I see no reason for Google to try to “out-Apple” Apple. Think of the strange relationship that is going to exist with companies like HTC and Samsung. In the recent past, market pressure pushed those companies towards Google. Apple was closed to them. Microsoft’s Windows Phone 7 was open, but Nokia was clearly customer number one, in a special preferred-customer position. Now Google is not just a supplier but a competitor. I think Microsoft is secretly happy about all of this. It makes their relationship with Nokia look tame by comparison.

Could this be herd instinct? Apple makes the iPhone. HP bought Palm. Microsoft is in bed with Nokia. RIM makes Blackberry. Perhaps Google fell victim to the “everyone else is doing it” syndrome. Somehow I doubt it. The people at Google are nothing if not sharp. Still, it has happened at this level before.

One possible reason for the acquisition might be to push NFC. NFC requires that very specific hardware be placed inside smartphones. The Motorola Mobility arm of Google could push this. However, I think NFC can be effectively pushed without making the phones themselves. I don’t buy this as a reason for the acquisition.

That brings me to one final reason for the purchase – set top boxes. I have discussed how the real goal is a very broad and unified ecosystem. The TV is a big part of that. Google could merge GoogleTV into the Motorola Mobility set top box units. As a competitor in the set top box space they might be in a good position to drive their ecosystem. I have argued before that consumers don’t like extra boxes and hence AppleTV and even external game boxes (PS3, Wii, Xbox) are interim solutions. The one external box that has some life left is the cable box.  Google could merge the cable box, GoogleTV and Android games into one piece of hardware. Moving between cable product, internet streams and applications could be made very unified and essentially transparent to the consumer.

Summary: This acquisition is all about the patent portfolio and using it as a counter to Apple and Microsoft. However, Google is left with a hardware business that competes with key customers.

My recommendation: if I was willing to tell Apple what to do, then why not another multibillion-dollar company that is highly profitable? So Google, here is what you should do. Sell off the mobile device arm of Motorola Mobility but keep the set top boxes. Keep all of the patents and just license them to the entity acquiring the cell phone business. Finally, merge GoogleTV into the cable box and make GoogleTV fully compatible with Android games. Use your newfound cable box presence to drive a broader ecosystem that is more unified than what consumers have now.

If you have followed my blog from its inception you know I feel the phone will become your primary computer. That feeling continues to grow stronger. The more difficult issue is discerning just what path this will take. I have mentioned before that companies can fail by jumping to the final solution and not realizing that change often progresses along a jagged path. My ultimate dream is a device that connects to the proper interface in a transparent fashion.

Right now we have WiFi and Bluetooth. Apple lets AirPlay ride on WiFi. This gives some support for video transfer from an iPad to a TV, but requires an Apple TV device to make it happen. However, none of this handles the high bandwidth needed to make the user interface, and the high-definition video that goes with it, work without compromise. Enter standards groups to the rescue; unfortunately, too many groups.

A first stab at this came with wireless USB. This is an ultra-wideband technology that allows speeds up to 480 Mb/s, but only at a range of 3 meters. That is inadequate for 1080p 60 Hz video, much less 3D and higher resolutions. The technology has gotten very little traction.
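As a rough sanity check on why that falls short (assuming uncompressed 24-bit color and ignoring blanking and protocol overhead), 1080p at 60 Hz needs roughly

$$1920 \times 1080 \times 24\,\text{bits} \times 60\,\text{Hz} \approx 2.99\,\text{Gb/s},$$

more than six times what wireless USB offers, before even considering 3D or higher resolutions.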

The early leader was the WHDI (Wireless Home Digital Interface) consortium. However, the WirelessHD Consortium has an impressive list of supporters. Next comes the Wireless Gigabit Alliance, or WiGig. They also have some big players behind them, including some of the same companies backing WirelessHD. It’s all very confusing.

Recall what I said about major vs. minor trends. This has signs of being a major trend. But wait, it doesn’t “feel” that way. People aren’t scrambling to get wireless video hardware. That’s going to change. There is a lot in the works and it will take time to gel but it will gel.

Who am I betting on? Well, I’ll start with an interesting fact. Of particular interest here is WiGig’s adoption of support for wireless DisplayPort. Not mentioned on the WiGig website is an important name: Apple. Recall that Apple is the big force behind DisplayPort. A second force pushing WiGig is the movement by companies like Panasonic to take WiGig mobile. WHDI is mobile-capable but has more challenges extending its speed and flexibility. Another related major announcement is the Qualcomm Atheros AR9004TB chip for WiGig. However, this looks suited for laptops and docking stations, not phones. It will compete with WirelessHD solutions such as the SiI6320/SiI6310 WirelessHD® HRTX chipset.

How does this play out? The Qualcomm chip shows the way to docking stations for tablets and phones. These may have some success, but the need is for a more embedded solution. That will start with laptops, which have the luxury of more board space and larger batteries. However, it will move into phones once the power issue is solved. This won’t be the end. So far I have been discussing wireless video. True transparency will require something more general. For that I expect something like wPCIe from Wilocity to allow full connectivity. Initially wPCIe will allow laptops to wirelessly dock with peripherals. Longer term, this too will migrate into the tablet and the phone. At that point your phone will wirelessly dock with external hard drives, displays, and pretty much anything else you would hook to a desktop. wPCIe is based on the WiGig standard, so it will be a quick extension to WiGig wireless video. That also means the range will be adequate to allow your phone or laptop to be several meters away from the other end of the wireless link.

Currently, none of this matches the speed of Thunderbolt, but it may be close enough. WirelessHD has higher speeds already defined, and I expect WiGig to follow. Expect WiGig to look a lot like wireless Thunderbolt: Thunderbolt is basically DisplayPort plus PCI Express (PCIe), and WiGig will also include DisplayPort and PCIe. For true speed freaks, a hard-wired connection will always be best. Thunderbolt will move to 100 Gb/s when the move is made from copper to fiber. By then WiGig and WirelessHD will just be matching copper-connected Thunderbolt in performance.

There’s a lot more at play here that makes it difficult to predict the winner. WHDI works at lower frequencies and can connect through walls. WirelessHD and WiGig are strictly line of sight. However, some of the claims for future versions of WHDI are suspect, since they involve very high data rates relative to the available frequency bandwidth. WiGig has the ability to move from a WiFi connection to a WiGig connection in a transparent fashion. WHDI is mobile-capable now since it rides on older WiFi-era radio technology. I am uncertain when a low-power WiGig or WirelessHD chip will be available.

A friend sent me a link to an article on changes coming in microprocessors. The article is The Lifer: Why Your Core i7 Processor May Be Obsolete Sooner Than You Think. It got me thinking about writing this post not because the article has any great insight but because of the opposite. The article is too shallow.

One of the topics mentioned is specialized computing. This is nothing new. While it wasn’t the beginning, many people may remember the Intel 8087 floating point coprocessor that offloaded the 8086. Earlier there was the less well known 8231A. I have linked to a copy of the datasheet if you want to see how things used to be. The 8231A paired with the 8080 microprocessor. Interestingly, considering the two companies today, the 8231 and 8231A were licensed versions of AMD’s Am9511 and Am9511A, introduced in 1977. Today, we take it for granted that this floating point capability is built into the processors we use.

Throughout computing history, the research agencies have driven the need for large, somewhat specialized computers. From the CDC 6600 (1964), to the Cray-1 (1976), to Nebulae (2010), floating point performance has driven a class of supercomputers designed for scientific and military research. Originally these designs employed vector processors. Today, machines like Nebulae use off-the-shelf graphics processors as general-purpose computing engines (GPGPU). In particular, nVidia has started marketing to this area. The problem is that modern GPUs are basically SIMD machines and bring along many of the limitations of a SIMD architecture. Working with the limitations of SIMD and mitigating them is a big topic with a large body of work, so I won’t address it in depth here. For restricted problems such as graphics rendering it is a very effective approach. At the top end, the AMD 6990 graphics card contains two processor chips which together yield 3072 stream processors, 192 texture units, 128 Z/stencil ROP units, and 64 color ROP units. For graphics rendering this gives amazing performance. What it is not good at is general computing. In summary, specialized computing is nothing new and has been with us for a long time. Massively parallel specialized computing is here today.

Myslewski talks about large numbers of general-purpose computing cores. We have made great progress utilizing four-core and even eight-core systems. There are restricted problems, such as design rule verification of large chip designs, which are amenable to massively parallel systems. However, general-purpose computing has trouble utilizing even four cores effectively. More interesting than the straightforward approach Myslewski mentions are approaches which reconsider the very nature of what a processor is. I have been thinking about this lately after watching a talk by Steve Teig of Tabula.

http://www.c-eda.org/IEEE-CEDA-DAC-061510/IEEE-CEDA-DAC-061510.html

Steve mentions Haskell as a language of choice. This is a transition that is needed and is fundamental. We currently force-fit a one-CPU ecosystem onto multi-CPU processors. We patch language structures and manually work to make task division successful. In graphics this is somewhat straightforward: you tell the different cores, “Core 1, you work on this area of the scene; core 2, you work over here; core 3…” Except for specialized areas such as graphics, this model does not fit what we do today once we get beyond four cores. Right now we can, at a very simplistic level, say, “Core 1, you handle operating system commands; core 2, you run the program; core 3, you take care of the antivirus background tasks; core 4…” What is wrong here is the process and mindset itself. That’s why Steve mentions Haskell: the mental process I just outlined forces the code onto the processor (see the sketch at the end of this post).

What is needed is a new paradigm of code as architecture. I am not talking about the Tensilica approach but something closer to the work discussed here. If you read through the various papers you will see a common theme related to the problem of limited FPGA size. The idea of time as a third dimension opens the door to a possible solution. What needs to be worked out is an interface that gets around the von Neumann memory bottleneck and allows continuous reconfiguration of the FPGA. Once that is achieved, arbitrarily large code can be executed with a three-dimensional FPGA (X, Y, time) as the direct instantiation of the code. For an example of this type of FPGA, check out Tabula. Be careful not to get lost in the hardware, although that is a key component. The main advantage of the hardware is the ability to latch its state and rapidly reconfigure. More important than that functionality is compiling down into the FPGA in a way which allows a mapping of code to circuitry that continuously reconfigures as code is executed, rather than execute, save state, load code, reconfigure, execute. Let me know what you think of the concept of code as architecture.
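To make the Haskell point concrete, here is a minimal sketch (my own example, not from Steve’s talk; it uses GHC’s parallel package, compiled with ghc -threaded and run with +RTS -N). Nothing in it assigns work to a particular core; you only declare which evaluations are independent, and the runtime schedules them across however many cores exist:

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- A deliberately expensive pure function standing in for real work.
heavy :: Int -> Int
heavy n = sum [(i * i) `mod` 7919 | i <- [1 .. n]]

-- parMap sparks one independent evaluation per list element; the GHC
-- runtime, not the programmer, decides which core runs which spark.
main :: IO ()
main = print (sum (parMap rdeepseq heavy [200000 .. 200063]))
```

The contrast with the “core 1 does this, core 2 does that” mindset is the point: the code describes the computation, and the mapping onto hardware is someone else’s job. Code as architecture pushes that same idea one level further, down into the circuitry itself.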

At one time I sat on an advisory panel that reviewed requests for venture capital. This was related to Georgia’s Yamacraw initiative. The panel was advisory only but included local Atlanta businessmen. It was a small panel and we had some great discussions. The state would only invest if other venture capital funds were already investing. Consequently the proposals had already survived initial vetting. Still, I was surprised at what I saw. I was discussing this with a friend and thought I would post a few of my observations.

Observation 1: Commitment or lack thereof.

Time and again I saw companies based on work done at the Georgia Institute of Technology (GT). GT is a great school, and there is great research going on there. The problem was that the professor was proudly listed as a founder and key player, but when I asked if he had given up his tenured position to focus on the company, the answer was always a resounding “No!” Let me get this straight: you want people to put millions into your company, but you aren’t willing to work full time to make it a success? I’ll pass. I am sure there are examples of successful companies done this way; I just don’t like the odds. I want to distinguish this from the case where a graduate student has invented a technology while working with a professor and the student is forming the company with the professor as a consultant. As long as the founder is full time, I’m good.

Observation 2: Serial entrepreneur who doesn’t take responsibility.

There was one guy who had started an earlier company. We heard how the failure of the earlier company was the CFO’s fault. This guy had been the CEO. It happened on his watch. At the minimum I expected to hear about painful lessons learned and future corrective actions. Instead it was all excuse making. I passed.

Observation 3: Big ego and insistence on retaining control.

This one isn’t from the Yamacraw Advisory Board. Rather it is a painful lesson from a personal investment. I thought I had done my homework. The technology was sound. The market seemed ready. There was diverse funding. However, when more funding was needed, the founder refused to give up control of the company. I learned the hard way that this is a big red flag as is a person putting his name on the company. The CEO would have been an excellent, and probably very wealthy, VP of Engineering. Instead he became a failed CEO. Unlike me, the smart money refused to invest unless the venture capital crowd gained control. Hey, it’s their money and they wanted a meaningful say in what went on. They were correct in wanting that. I was too naive to see the danger signs. If a CEO is afraid of having to convince a board of directors then he shouldn’t be CEO.

Observation 4: Willingness to work hard and generate a hard work culture.

Building a successful company is hard work. Just ask people who have done it. Many will tell you that they are glad they hadn’t known how hard it would be because they might never have done it. When looking at a potential investment I want to know that the entire team understands that they are in a race against time and money. That means hard work; lots of hard work. It means being careful with spending. Generating a hard work culture doesn’t mean generating an oppressive one. People should be excited that they are building a company. Their ownership in the company should generate adequate motivation. The work environment should be alive with energy. People should be working hard because they want to win. This can be difficult to judge but it can come through talking to the executive staff.

This is by no means a complete list. I have avoided the standard topics of market analysis and threat analysis. Most books on the subject cover that better than I can in a blog entry.

OK, blogging can be difficult. I just got back from a trip to San Jose, so there has been nothing new for well over a week. The trip was very productive in many ways, but the blog suffered. While there, I discussed many things with several companies. Some will show up here. As a teaser: OLED and the necessity of RGBW technology, non-von Neumann computing, and high-speed memory and the need for serial I/O. This is all very techie stuff. So, today I’ll post something entirely different.

I am watching the news and the budget debate is very frustrating. There are claims and counter claims. Without taking sides, I found the first table in this Wikipedia article interesting. Don’t just look at who was president, although that is very interesting. It also shows which party controlled the House and Senate. Debt as a percentage of GDP is interesting. Mostly I am posting this to get you thinking and looking at facts rather than following whichever party rhetoric you are inclined to align with. Enjoy and think.

IP Reuse

Posted: July 16, 2011 in Management

For the past several years, intellectual property (IP) reuse has been a major topic at many companies. The idea is that a large corporate database of IP will reduce design times and disperse knowledge throughout the company. Rarely do these efforts work; the list of reasons is huge. That doesn’t mean IP reuse can’t work. In certain situations it has had a lot of success. You just have to go into it with your eyes wide open and a bit of a skeptical mindset. The distinguishing characteristic that separates success stories from failures is motivation. If the person doing the work is the person receiving the benefits, then it will work. If you are asking people to do work just because it’s the right thing to do, then the system will fail. Systems that work follow the rule “He who benefits does the work.” The best way to see this is to compare the most common structure, which happens to be a failing one, with two structures that work.

In the most common system, an IP repository is put into place. This might be something as simple as a list of IP and who “owns” it, all the way up to complex systems with checkout tracking and automatic notification of updates. Engineers are told to put useful IP into the system. Users of the IP explain everything needed to make the IP useful, which usually involves long lists of documentation, simulation files and testbench files. I once asked an engineering manager what would be required for reuse within his group. He gave me a long list of items he insisted be present. Now, it turned out that this same manager was sitting on a lot of IP that might have had corporate-wide use. I contacted him and asked him to please submit it. Of course I asked for the supporting information this manager had said was the minimum set. His immediate reply was that I was nuts. No way was he going to submit all of that; it was too much work. I had hit the fundamental problem: there was no reward for doing the work. Like most organizations, the company rewarded this manager for getting product out the door. It didn’t reward getting IP into the system. The manager loved the idea of grabbing IP if it aided his design efforts but was uninterested in helping others. Sadly, he was correct in his assessment of the reward system. When proposed, IP reuse usually gets sold as a time saver. It is pointed out how much time can be saved by reuse. What isn’t pointed out is the wasted effort getting items into the repository when those items will never be reused. Also not pointed out is that a lot of reuse has usually been going on already. People who design related items have been talking to each other, be it in the hallway or at lunch, and have been sharing. The informal sharing has been productive and efficient, albeit undocumented.

In certain companies the design process is such that IP reuse can be done in a natural and efficient way with aligned motivations. A good example is a company which designs a family of large ASICs, all of which are different combinations of building blocks, with large groups of related products on the same process. These blocks might include an HDMI input or perhaps a x4 USB 2.0 block. The architect requests the design of different blocks, and projects are generated within a group devoted to block design. Placing the block into the repository is no longer a side item with zero recognition. Rather, it represents the culmination of a project. For this block design group, release to manufacturing has been replaced by entry into the repository. There is alignment of interests. Reuse of the blocks is assured since there are defined products for them to go into. There is little waste. Where this works it is a great thing. The problem is that it can only work for certain companies or certain subgroups within companies.

The last example I want to give is something that is ready to be implemented but needs to be done carefully. The key is to make entry into the repository a natural part of the design process, requiring little extra effort on the part of the design engineer. If a block doesn’t get reused, then there is little lost effort since little extra time was spent. Engineers don’t rebel because they are just getting their engineering work done as they would if the repository didn’t exist. Where the heavy lifting comes in is on the design flow and engineering software support side. When a design block is started, the engineer is asked to classify the block and put in a brief description. Everything else is automatic. The owning design engineer, overall project name, design technology in use, etc. are all noted automatically. A placeholder is put into the IP repository, complete with an expected release time based on when the design project should be done. This solves the old problem of parallel development where engineers don’t have insight into what others are working on. Someone needing an HDMI I/O might see one under development that should be done well within his schedule constraints. When the block design is completed, it is marked as provisionally done and can be checked out. Once the original design has been checked out completely, it is shifted to a verified design status.
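As a rough sketch of what such a repository entry and its lifecycle might look like (every field, status and function name here is hypothetical, not taken from any particular tool):

```haskell
-- Lifecycle states mirroring the flow described above: a placeholder
-- at project start, provisionally done at completion, and verified
-- once the original design has fully checked out.
data Status = Placeholder | ProvisionallyDone | Verified
  deriving (Show, Eq)

data IPBlock = IPBlock
  { blockName       :: String  -- supplied by the engineer at kickoff
  , description     :: String  -- the brief classification/description
  , owner           :: String  -- captured automatically from the project
  , projectName     :: String  -- captured automatically
  , technology      :: String  -- captured automatically
  , expectedRelease :: String  -- derived from the project schedule
  , status          :: Status
  } deriving (Show)

-- The entry exists from day one, making parallel work visible early.
register :: String -> String -> String -> String -> String -> String -> IPBlock
register name desc own proj tech due =
  IPBlock name desc own proj tech due Placeholder

complete, verify :: IPBlock -> IPBlock
complete b = b { status = ProvisionallyDone }  -- block design finished
verify   b = b { status = Verified }           -- original design proven

main :: IO ()
main = print (verify (complete
  (register "hdmi_rx" "HDMI input block" "j.smith" "asic42" "40nm" "2011-Q4")))
```

The engineer supplies only the first two strings; everything else is filled in by the design flow, which is exactly what makes the repository a side effect of normal work rather than extra work.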

I’ll write more on IP reuse in the future and mention some specific software programs that can help. For now I want to summarize what to look at. First, do the company’s products share a similar architecture and process, or is the company diversified across many different product types? The answer dramatically affects the optimum IP reuse policy. Secondly, corporate culture must be considered. IP reuse requires alignment in culture across the company and a unified message from the top down. The key here is honesty by top management about itself. Remember that the goal is selling product to the customer, not fancy PowerPoint slides or internal metrics. Reuse is a very good thing if you pick the proper shade of grey, i.e., an implementation that aids productivity rather than generating lots of work with little return. It can be done.

I have one final thought. IP reuse is different from sharing corporate knowledge. Be careful not to confuse the two. Each can aid the other but they are different.