Archive for August, 2011

First and foremost, Apple sells a polished user experience. Apple sweats the details. From the moment you walk into the store, the experience is polished and first-rate, and unboxing your purchase continues it. Even Apple's service group, AppleCare, is different: you get lots of attention from people who know what they are doing. Apple hardware has a lot of refinement, the OS feels consistent, and people consistently describe Apple products as intuitive and easy to use.

I have written about convergence and transparency. These two trends play right into Apple's strengths. Apple is selling more and more laptops because people have purchased iPhones. People who have purchased iPads are now buying iPhones. The release of OS X Lion moves the laptop closer to iOS, and the iPhone and the iPad already run the same OS. This means transparency of use. But, for the first time, I see Apple moving backwards. Their new policy requires that Apple receive 30% of any in-app purchase. I can see how Apple reached this point. Games would be offered for free in the App Store; once you started playing, you found out you had to make an in-app purchase to go beyond level 3. Apple saw this as a direct end run around its App Store policies, a way to avoid paying Apple its cut. Admittedly, at 30% that cut is big, so companies, especially small ones, are highly motivated to avoid this form of App Store "tax." None of this is a big problem as long as we are talking about games. Things are different when it comes to magazines and books.

So far the best example of the move towards transparency has been the Kindle ecosystem. There are Kindle apps for just about every device: Android, iPhone, iPad, Mac, and Windows. If you buy a book through any one app, it is available on all of the others. Bookmarks are shared. You can read on your tablet, pick up on your phone, and finish on your laptop; in every case, when you move to a new device, the app knows where you left off on the old one. This is transparency of use in action. Now Apple is working to hinder that transparency.

Reading books is still a transparent experience. Buying them, however, now involves exiting the Kindle program and using a web browser to go to Amazon.com. On iOS you can't even click a button in the Kindle app and have it open Safari at the appropriate URL, something the Mac Kindle app still allows. What should really happen is that the Kindle store should be built into the Kindle app. I suspect it eventually will be on Android. It will never be on iOS devices: Apple's 30% cut would turn a moneymaker into a loss leader. Not only is 30% too high, I see no reason Apple should get anything. The books aren't being bought through Apple's online store. Besides, it is anticompetitive, giving Apple's own iBooks a pricing advantage. The problem is, iBooks isn't as universal as Kindle. This small chink in Apple's image is becoming a growing crack. Online forums have end users griping about it. This is a chance for Google to press Apple and change the image of Android vs. iOS.

Until now, Android has been an interesting phone OS beloved by techies for its openness and many features. Most consumers have viewed, and in fact still view, Apple's iOS as the more polished and bug-free operating system for phones and tablets. Apple's greed could change that. Android gets more polished day by day. If in-app purchases become the norm on Android and the exception on iOS, then consumers will see Android as the easier and more transparent operating system. Imagine the difference if Amazon gives its Kindle apps smooth integration with the Kindle store everywhere except on Apple devices. As more people buy and read ebooks, this will push them towards Android instead of iOS. All you have to do is read this to see how Apple may be inadvertently causing apps to be less friendly. Android versions of the apps won't be so limited.

Right now Apple’s new policy has done little other than make Apple richer and tick off some app writers. However, as Android keeps getting stronger, this policy might come to threaten Apple when consumers begin to find buying and reading ebooks and ezines easier and more transparent on Android than iOS.


Moogle Update

Posted: August 24, 2011 in Google, Motorola

Despite all of the niceties said by Samsung and others, we now see the truth. Samsung doesn't like Google becoming a competitor. The result is that Samsung is now supporting a Korean effort to develop its own phone OS. Check it out here. My advice to Google still stands: they should sell off Motorola Mobility's cell phone business.

HP paid $1.2B for Palm. Now they are dumping that and more. I have been saying that the only ecosystems that will survive are Apple, Google (Android), and Microsoft. The carnage has started. WebOS was a good OS, but that doesn't matter. It was too late, too poorly marketed, and never got traction. Now it is essentially dead. RIM will follow, although not in the near future.

More shocking is the announcement that HP may exit the PC market. HP leads the PC market in market share. How could they possibly want to exit that market? To understand why HP could even be considering this, you need to look a little deeper. The laptop market is very competitive. That translates to low margins for everyone except Apple; only Apple has a customer base willing to consistently pay a premium for its laptops. Additionally, HP's market share has been falling. But here is the main reason: the phone is becoming the dominant computing device. The laptop is rapidly becoming secondary, and desktops already are. The only way to shore up laptops in a way that would maintain margins was to develop an ecosystem with laptops as part of it. WebOS was a poor attempt at that. With the failure of WebOS, HP laptops will have to compete as just another part of the Microsoft ecosystem. That's OK now, but it is a position that gets worse each day. If you count tablets as part of mobile computing, then Apple has already surpassed HP in market share. What HP is afraid of is being trapped in a market that is losing relevance, decreasing in size, and so commoditized that there is little differentiation. All of that leads to little or no profit.

The big takeaway is that this is not an isolated event. It is part of the convergence trend I have been discussing. There will be more titanic changes to come, and they will involve more than RIM.

By now most readers will be aware that Google is buying Motorola Mobility. I started to write about this when I first heard the news but I wanted to think about it and explore the implications and potential reasons. Time is up. Here are my thoughts.

The most straightforward reason is patent defense. When Google lost out to Microsoft and Apple in the bidding for the Nortel patent portfolio, it was left in a very bad position: Android violates several of the Nortel patents. Google launched an offensive claiming Apple and Microsoft were using patents, as opposed to compelling solutions, as a way to attack Google. We must remember that Google also bid for these patents and, had they won, would probably have used them against Microsoft and Apple. Furthermore, an offer to join Microsoft and Apple in acquiring the patents was rebuffed by Google. If the purchase of Motorola Mobility is indeed a defensive play, then this is nothing more than another round of that old patent game: "I'll cross-license mine if you will cross-license yours." Considering the large amounts of cash Google is sitting on, this might be a very sensible move.

Could there be more to the acquisition than patents? Google has made cell phones in the past, when it was jump-starting Android. But should it be a cell phone producer? In the PC space, Apple has been a small closed ecosystem compared to the loose and very diversified Microsoft ecosystem. The result was larger, cheaper, and more diversified hardware and software around Windows (Microsoft) than around OS X (Apple). Recall that, at one time (the Apple II era), Apple dominated the desktop space; the diversity of the Microsoft-based environment turned Apple into a niche player. Today, despite Apple's early lead, there is a strong possibility that Android will be the Windows of the smartphone and tablet space. I see no reason for Google to try to out-Apple Apple. Think of the strange relationship that is going to exist with companies like HTC and Samsung. In the recent past, market pressure pushed those companies towards Google: Apple was closed to them, and while Microsoft's Windows Phone 7 was open, Nokia was clearly customer number one, in a special, preferred-customer position. Now Google is not just a supplier but a competitor. I think Microsoft is secretly happy about all of this. It makes their relationship with Nokia look tame by comparison.

Could this be herd instinct? Apple makes the iPhone. HP bought Palm. Microsoft is in bed with Nokia. RIM makes Blackberry. Perhaps Google fell victim to the “everyone else is doing it” syndrome. Somehow I doubt it. The people at Google are nothing if not sharp. Still, it has happened at this level before.

One possible reason for the acquisition might be to push NFC (near field communication). NFC requires that very specific hardware be placed inside smartphones, and the Motorola Mobility arm of Google could push this. However, I think NFC can be effectively pushed without making the phones themselves. I don't buy this as a reason for the acquisition.

That brings me to one final reason for the purchase: set-top boxes. I have discussed how the real goal is a very broad and unified ecosystem. The TV is a big part of that. Google could merge GoogleTV into the Motorola Mobility set-top box units. As a competitor in the set-top box space, Google would be in a good position to drive its ecosystem. I have argued before that consumers don't like extra boxes, and hence AppleTV and even external game boxes (PS3, Wii, Xbox) are interim solutions. The one external box that has some life left is the cable box. Google could merge the cable box, GoogleTV, and Android games into one piece of hardware. Moving between cable programming, internet streams, and applications could be made unified and essentially transparent to the consumer.

Summary: This acquisition is all about the patent portfolio and using it as a counter to Apple and Microsoft. However, Google is left with a hardware business that competes with key customers.

My recommendation: If I was willing to tell Apple what to do, then why not another highly profitable multibillion-dollar company? So Google, here is what you should do. Sell off the mobile device arm of Motorola Mobility but keep the set-top boxes. Keep all of the patents and just license them to the entity acquiring the cell phone business. Finally, merge GoogleTV into the cable box and make GoogleTV fully compatible with Android games. Use your newfound cable box presence to drive a broader ecosystem that is more unified than what consumers have now.

If you have followed my blog from its inception you know I feel the phone will become your primary computer. That feeling continues to grow stronger. The more difficult issue is discerning just what path this will take. I have mentioned before that companies can fail by jumping to the final solution and not realizing that change often progresses along a jagged path. My ultimate dream is a device that connects to the proper interface in a transparent fashion.

Right now we have WiFi and Bluetooth. Apple lets AirPlay ride on WiFi. This gives some support for video transfer from an iPad to a TV but requires an Apple TV device to make it happen. However, none of this handles the high bandwidth needed to make the user interface, and the high definition video that goes with it, work without compromise. Enter the standards groups to the rescue; unfortunately, too many groups.

A first stab at this came with Wireless USB, an ultra-wideband technology that allows up to 480 Mb/s, but only at a range of 3 meters. That is inadequate for 1080p 60 Hz video, much less 3D and higher resolutions. The technology has gotten very little traction.
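To see why 480 Mb/s falls short, a quick back-of-envelope calculation helps. This is just a sketch; it assumes 24-bit color and ignores blanking intervals, protocol overhead, and any compression:

```python
# Raw bit rate for uncompressed 1080p at 60 frames per second.
# Assumptions: 24 bits per pixel, no blanking or protocol overhead,
# no compression -- real links need margin on top of this.
width, height, bits_per_pixel, fps = 1920, 1080, 24, 60

required_bps = width * height * bits_per_pixel * fps
print(f"Uncompressed 1080p60: {required_bps / 1e9:.2f} Gb/s")  # ~2.99 Gb/s
print("Wireless USB budget:   0.48 Gb/s")
```

Roughly 3 Gb/s of video against a 480 Mb/s budget; even aggressive compression would struggle to close that gap without visible artifacts.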

The early leader was the WHDI (Wireless Home Digital Interface) consortium. However, the WirelessHD consortium has an impressive list of supporters. Next comes the Wireless Gigabit Alliance, or WiGig, which also has some big players behind it, including some of the same companies backing WirelessHD. It's all very confusing.

Recall what I said about major vs. minor trends. This has the signs of being a major trend. But wait, it doesn't "feel" that way; people aren't scrambling to get wireless video hardware. That's going to change. There is a lot in the works, and it will take time to gel, but it will gel.

Who am I betting on? Well, I'll start with an interesting fact. Of particular interest here is WiGig's adoption of support for wireless DisplayPort. Not mentioned on the WiGig website is an important name: Apple. Recall that Apple is the big force behind DisplayPort. A second force pushing WiGig is the move by companies like Panasonic to take WiGig mobile. WiDi is mobile capable but has more trouble extending its speed and flexibility. Another related major announcement is the Qualcomm Atheros AR9004TB chip for WiGig. However, it looks suited to laptops and docking stations, not phones. It will compete with WirelessHD solutions such as the SiI6320/SiI6310 WirelessHD HRTX chipset.

How does this play out? The Qualcomm chip shows the way to docking stations for tablets and phones. These may have some success, but the need is for a more embedded solution. That will start with laptops, which have the luxury of more board space and larger batteries. However, it will move into phones once the power issue is solved. This won't be the end. So far I have been discussing wireless video; true transparency will require something more general. For that I expect something like wPCIe from Wilocity to allow full connectivity. Initially wPCIe will allow laptops to wirelessly dock with peripherals. Longer term, this too will migrate into the tablet and the phone. At that point your phone will wirelessly dock with external hard drives, displays, and pretty much anything else you would hook to a desktop. wPCIe is based on the WiGig standard, so it will be a quick extension to WiGig wireless video. That also means the range will be adequate to allow your phone or laptop to be several meters away from the other end of the wireless link.

Currently, none of this matches the speed of Thunderbolt, but it may be close enough. WirelessHD has higher speeds already defined, and I expect WiGig to follow. Expect WiGig to look a lot like wireless Thunderbolt: Thunderbolt is basically DisplayPort plus PCI Express (PCIe), and WiGig will also include DisplayPort and PCIe. For true speed freaks, a hard-wired connection will always be best. Thunderbolt will move to 100 Gb/s when the move is made from copper to fiber. By then, WiGig and WirelessHD will just be matching copper-connected Thunderbolt in performance.

There’s a lot more at play here that makes it difficult to predict the winner. WIDI works at lower frequencies and can connect through walls. WirelessHD and WiGig are strictly line of sight. However, some of the claims for future versions of WIDI are suspect since they involve very high data rates relative to available frequency bandwidth. WiGig has the ability to move from a WiFi connection to a WiGig connection in a transparent fashion. WIDI is mobile capable now since it rides on older WiFi technology. I am uncertain when a low power WiGig or WirelessHD chip will be available.

A friend sent me a link to an article on changes coming in microprocessors: The Lifer: Why Your Core i7 Processor May Be Obsolete Sooner Than You Think. It got me thinking about writing this post, not because the article has any great insight but because of the opposite. The article is too shallow.

One of the topics mentioned is specialized computing. This is nothing new. While it wasn't the beginning, many people may remember the Intel 8087 floating point coprocessor that offloaded the 8086. Earlier there was the less well known 8231A. I have linked to a copy of the datasheet if you want to see how things used to be. The 8231A paired with the 8080 microprocessor. Interestingly, considering the two companies today, the 8231 and 8231A were licensed versions of AMD's Am9511 and Am9511A, introduced in 1977. Today, we take it for granted that this floating point capability is built into the processors we use.

Throughout computing history, the research agencies have driven the need for large, somewhat specialized computers. From the CDC 6600 (1964), to the Cray-1 (1976), to Nebulae (2010), floating point performance has driven a class of supercomputers designed for scientific and military research. Originally these designs employed vector processors. Today, machines like Nebulae use off-the-shelf graphics processors as general purpose computing engines (GPGPU). In particular, nVidia has started marketing to this area. The problem is that modern GPUs are basically SIMD machines and bring along many of the limitations of a SIMD architecture. Working with and mitigating those limitations is a big topic with a large body of work, so I won't address it in depth here. For restricted problems such as graphics rendering, it is a very effective approach. At the top end, the AMD Radeon HD 6990 graphics card contains two processor chips which together yield 3072 stream processors, 192 texture units, 128 Z/stencil ROP units, and 64 color ROP units. For graphics rendering this gives amazing performance. What it is not good at is general computing. In summary, specialized computing is nothing new and has been with us for a long time. Massively parallel specialized computing is here today.
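To illustrate the core SIMD limitation: every lane executes the same instruction stream, so data-dependent branching has to be handled by computing both paths and selecting per element. A small NumPy sketch of the idea (NumPy is not a GPU, but its whole-array operations mirror the lockstep style):

```python
import numpy as np

x = np.random.rand(1_000_000)

# SIMD-friendly: one instruction stream applied uniformly to all data.
y = 2.0 * x + 1.0

# A data-dependent branch breaks lockstep. The SIMD-style answer is
# predication: evaluate BOTH branches for every element, then select.
# np.where below really does compute sin(x) and cos(x) in full, so
# half the arithmetic is thrown away -- the same waste a GPU incurs
# when threads in a warp diverge.
z = np.where(x > 0.5, np.sin(x), np.cos(x))
```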

Myslewski talks about large numbers of general purpose computing cores. We have made great progress utilizing four-core and even eight-core systems. There are restricted problems, such as design rule verification of large chip designs, which are amenable to massively parallel systems. However, general purpose computing has trouble utilizing even four cores effectively; Amdahl's law quantifies why (see the sketch after the talk link below). More interesting than the straightforward approach Myslewski mentions are approaches which reconsider the very nature of what a processor is. I have been thinking about this lately after watching a talk by Steve Teig of Tabula.

http://www.c-eda.org/IEEE-CEDA-DAC-061510/IEEE-CEDA-DAC-061510.html
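First, the promised Amdahl's law sketch. The serial fraction of a program caps the speedup no matter how many cores you add; the 75% figure below is just an illustrative assumption:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: speedup = 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A program that is 75% parallelizable tops out fast:
for n in (2, 4, 8, 64):
    print(f"{n:2d} cores -> {amdahl_speedup(0.75, n):.2f}x")
# 2 -> 1.60x, 4 -> 2.29x, 8 -> 2.91x, 64 -> 3.82x  (asymptote: 4x)
```

Going from four cores to sixty-four buys barely another 1.5x, which is why simply piling on general purpose cores is not, by itself, the answer.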

Steve mentions Haskell as a language of choice. This is a transition that is needed, and it is fundamental. We currently force-fit a one-CPU mindset onto multi-CPU processors, patching language structures and manually working to make task division succeed. In graphics this is somewhat straightforward: you tell the different cores, "Core 1, you work on this area of the scene; core 2, you work over here; core 3..." Except for specialized areas such as graphics, this model does not fit what we do today once we get beyond four cores. Right now we can, at a very simplistic level, say, "Core 1, you handle operating system commands; core 2, you run the program; core 3, you take care of the antivirus background tasks; core 4..." What is wrong here is the process and mindset itself. That's why Steve mentions Haskell: the mental process I just outlined forces the code onto the processor. (A sketch of the two mindsets follows at the end of this post.)

What is needed is a new paradigm of code as architecture. I am not talking about the Tensilica approach but something closer to the work discussed here. If you read through the various papers you will see a common theme related to the problem of limited FPGA size. The idea of time as a third dimension opens the door to a possible solution. What needs to be worked out is an interface that gets around the von Neumann memory bottleneck and allows continuous reconfiguration of the FPGA. Once that is achieved, arbitrarily large code can be executed with a three dimensional FPGA (X, Y, time) as the direct instantiation of the code. For an example of this type of FPGA, check out Tabula. Be careful not to get lost in the hardware, although that is a key component. The main advantage of the hardware is the ability to latch its state and rapidly reconfigure. More important than that is compiling down into the FPGA in a way which maps code to circuitry that continuously reconfigures as the code executes, rather than execute, save state, load code, reconfigure, execute. Let me know what you think of the concept of code as architecture.
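As promised, here is the contrast in miniature. Haskell is Steve's example; this sketch uses Python for consistency with the other snippets in this post, and render_tile is a made-up stand-in for any embarrassingly parallel unit of work. The point is the mindset: declare what is parallel and let the runtime place it, instead of hand-assigning work to cores:

```python
from concurrent.futures import ProcessPoolExecutor

def render_tile(tile_id: int) -> bytes:
    """Hypothetical stand-in for an independent unit of work."""
    return f"tile-{tile_id}".encode()

if __name__ == "__main__":
    # Imperative mindset: "core 1, take tiles 0-15; core 2, take 16-31..."
    # Declarative mindset: state that the tiles are independent and let
    # the runtime decide which cores (and how many) to use.
    with ProcessPoolExecutor() as pool:
        tiles = list(pool.map(render_tile, range(64)))
    print(f"rendered {len(tiles)} tiles")
```

Code as architecture pushes the same idea further down: instead of a runtime scheduling work onto fixed cores, the compiler maps the code itself onto reconfigurable circuitry.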

At one time I sat on an advisory panel that reviewed requests for venture capital. This was related to Georgia's Yamacraw initiative. The panel was advisory only, but it included local Atlanta businessmen. It was a small panel and we had some great discussions. The state would only invest if other venture capital funds were already investing, so the proposals had already survived initial vetting. Still, I was surprised at what I saw. I was discussing this with a friend and thought I would post a few of my observations.

Observation 1: Commitment or lack thereof.

Time and again I saw companies based on work done at the Georgia Institute of Technology (GT). GT is a great school and there is great research going on there. The problem was that the professor was proudly listed as a founder and key player, but when I asked if he had given up his tenured position to focus on the company, the answer was always a resounding "No!" Let me get this straight: you want people to put millions into your company, but you aren't willing to work full time to make it a success. I'll pass. I am sure there are examples of successful companies done this way; I just don't like the odds. I want to distinguish this from the case where a graduate student has invented a technology while working with a professor and the student is forming the company with the professor as a consultant. As long as the founder is full time, I'm good.

Observation 2: Serial entrepreneur who doesn’t take responsibility.

There was one guy who had started an earlier company. We heard how the failure of that company was the CFO's fault. This guy had been the CEO. It happened on his watch. At a minimum I expected to hear about painful lessons learned and future corrective actions. Instead it was all excuse making. I passed.

Observation 3: Big ego and insistence on retaining control.

This one isn't from the Yamacraw advisory panel. Rather, it is a painful lesson from a personal investment. I thought I had done my homework. The technology was sound. The market seemed ready. There was diverse funding. However, when more funding was needed, the founder refused to give up control of the company. I learned the hard way that this is a big red flag, as is a person putting his name on the company. The CEO would have been an excellent, and probably very wealthy, VP of Engineering. Instead he became a failed CEO. Unlike me, the smart money refused to invest unless the venture capital crowd gained control. Hey, it's their money, and they wanted a meaningful say in what went on. They were correct in wanting that; I was too naive to see the danger signs. If a CEO is afraid of having to convince a board of directors, then he shouldn't be CEO.

Observation 4: Willingness to work hard and build a hard-work culture.

Building a successful company is hard work. Just ask people who have done it. Many will tell you they are glad they didn't know how hard it would be, because they might never have done it. When looking at a potential investment, I want to know that the entire team understands they are in a race against time and money. That means hard work; lots of hard work. It also means being careful with spending. A hard-work culture doesn't mean an oppressive one. People should be excited that they are building a company, and their ownership in the company should generate adequate motivation. The work environment should be alive with energy. People should be working hard because they want to win. This can be difficult to judge, but it comes through in talking to the executive staff.

This is by no means a complete list. I have avoided the standard topics of market analysis and threat analysis. Most books on the subject cover that better than I can in a blog entry.