Posts Tagged ‘Apple’

First and foremost, Apple sells a polished user experience. Apple sweats the details. From the moment you walk into the store, the experience is polished and first rate. Unboxing your purchase continues the experience. Even Apple’s service group, AppleCare, is different: you get lots of attention from people who know what they are doing. Apple hardware has a lot of refinement, the OS feel is consistent, and people consistently describe Apple products as intuitive and easy to use.

I have written about convergence and transparency. These two trends play right into Apple’s strengths. Apple is selling more and more laptops because people have purchased iPhones. People who have purchased iPads are now buying iPhones. The release of OSX Lion moves the laptop closer to iOS. The iPhone and the iPad use the same OS. This means transparency of use. But, for the first time, I see Apple moving backwards. Their new policy requires that Apple receive 30% of any in-app purchase. I can see how Apple reached this point. Games would be offered for free in the Apple App Store. Once you started playing the game, you found out you had to do an in-app purchase to go beyond level 3. Apple saw this as a direct end run around their app store policies in order to avoid paying Apple their cut. Admittedly, at 30% that cut is big and hence companies, especially small ones, are highly motivated to avoid this form of app store “tax.” None of this is a big problem as long as we are talking about games. Things are different when it comes to magazines and books.

So far the best example of the move towards transparency has been the Kindle ecosystem. There are Kindle apps for just about every device. There are apps for Android, iPhone, iPad, Mac, and Windows. If you buy a book through any one app it is available on all of the others. Bookmarks are shared. You can read on your tablet, pick up on your phone and finish up on your laptop. In every case, when you move to a new device, the app knows where you left off on the old one. This is transparency of use in action. Now Apple is working to hinder that transparency.
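The shared-bookmark behavior described above can be sketched in a few lines. This is just an illustration of the "most recent position wins" idea, not Amazon's actual sync protocol; the names and structure here are my own invention.

```python
from dataclasses import dataclass

@dataclass
class ReadingPosition:
    book_id: str
    location: int      # position within the book
    timestamp: float   # when the device last recorded it

def resolve_position(local: ReadingPosition, remote: ReadingPosition) -> ReadingPosition:
    """Pick the position to resume from: the most recently recorded one wins."""
    return local if local.timestamp >= remote.timestamp else remote

# Your tablet read to location 1200 earlier; your phone later reached 1450.
tablet = ReadingPosition("book-1", 1200, 100.0)
phone = ReadingPosition("book-1", 1450, 200.0)
print(resolve_position(tablet, phone).location)   # 1450
```

When you open the book on your laptop, it compares its own last-known position against the cloud's and jumps to whichever was recorded most recently. That is transparency of use in a dozen lines.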

Reading books is still a transparent experience. However, buying them now involves exiting the Kindle program and using a web browser to go to Amazon.com. On iOS you can’t even click a button in the Kindle app and have it open Safari with the appropriate URL, though the Mac Kindle app can do exactly that. What should really happen is that the Kindle store should be built into the Kindle app. I suspect it eventually will be on Android. It will never be on iOS devices: Apple’s 30% cut would change a money maker into a loss leader. Not only is 30% too high, I see no reason Apple should get anything. The books aren’t being bought through Apple’s online store. Besides, it is anticompetitive. It gives Apple’s own iBooks a pricing advantage. The problem is, iBooks isn’t as universal as Kindle. This small chink in Apple’s image is becoming a growing crack. Online forums have end users griping about it. This is a chance for Google to press Apple and change the image of Android vs. iOS.

Until now, Android has been an interesting phone OS beloved by techies for its openness and many features. Most consumers have viewed, and still view, Apple’s iOS as the more polished and bug-free operating system for phones and tablets. Apple’s greed could change that. Android gets more polished day by day. If in-app purchases become the norm for Android and the exception for iOS, then consumers will see Android as the easier and more transparent operating system. Imagine the difference if Amazon gives its Kindle apps smooth integration with the Kindle store everywhere except on Apple devices. As more people buy and read ebooks, this will push them towards Android instead of iOS. All you have to do is read this to see how Apple may be inadvertently causing apps to be less friendly. Android versions of the apps won’t be so limited.

Right now Apple’s new policy has done little other than make Apple richer and tick off some app writers. However, as Android keeps getting stronger, this policy might come to threaten Apple when consumers begin to find buying and reading ebooks and ezines easier and more transparent on Android than iOS.

HP paid $1.2B for Palm. Now they are dumping that and more. I have been saying that the only ecosystems that will survive are Apple, Google (Android) and Microsoft. The carnage has started. WebOS was a good OS. That doesn’t matter. It was too late, too poorly marketed and never got traction. Now it is essentially dead. RIM will follow although not in the near future.

More shocking is the announcement that HP may exit the PC market. HP leads the PC market in market share. How can they possibly want to exit that market? To understand why HP could even be considering this, you need to look a little deeper. The laptop market is very competitive. That translates to low margins for everyone except Apple; only Apple has a customer base willing to consistently pay a premium for its laptops. Additionally, HP’s market share has been falling. But here is the main reason: the phone is becoming the dominant computing device. The laptop is rapidly becoming secondary. Desktops are already secondary devices. The only way to shore up laptops in a way that would maintain margins was to develop an ecosystem with laptops as part of it. WebOS was a poor attempt at that. With the failure of WebOS, HP laptops will have to compete as just another part of the Microsoft ecosystem. That’s OK now, but it is a position that gets worse each day. If you count tablets as part of mobile computing, then Apple has already surpassed HP in market share. What HP is afraid of is being trapped in a market that is losing relevance, shrinking, and so commoditized that there is little differentiation. All of that leads to little or no profit.

The big takeaway is that this is not an isolated event. It is part of the convergence trend I have been discussing. There will be more titanic changes to come, and they will involve more than RIM.

By now most readers will be aware that Google is buying Motorola Mobility. I started to write about this when I first heard the news but I wanted to think about it and explore the implications and potential reasons. Time is up. Here are my thoughts.

The most straightforward reason is patent defense. When Google lost out to Microsoft and Apple in the bidding for the Nortel patent portfolio, it was left in a very bad position. Android violates several of the Nortel patents. Google launched an offensive claiming Apple and Microsoft were using patents, as opposed to compelling solutions, as a way to attack Google. We must remember that Google also bid for these patents and, had they won, would probably have used them against Microsoft and Apple. Furthermore, an offer to join with Microsoft and Apple in acquiring the patents was rebuffed by Google. If the purchase of Motorola Mobility is indeed a defensive play, then this is nothing more than another round of that old patent game: “I’ll cross license mine if you will cross license yours.” Considering the large amounts of cash Google is sitting on, this might be a very sensible move.

Could there be more to the acquisition than patents? Google has made cell phones in the past when it was jump-starting Android. But, should they be a cell phone producer? In the PC space Apple has been a small closed ecosystem compared to the loose and very diversified Microsoft ecosystem. The result was a larger, cheaper and more diversified hardware and software ecosystem for Windows (Microsoft) compared to OSX (Apple). Recall that, at one time (Apple II), Apple dominated the desktop space. The diversity of the Microsoft based environment resulted in Apple becoming a niche player. Today, despite Apple’s early lead, there is a strong possibility that Android will be the Windows of the smartphone and tablet space. I see no reason for Google to try to “out Apple” Apple. Think of the strange relationship that is going to exist with companies like HTC and Samsung. In the recent past, market pressure pushed those companies towards Google. Apple was closed to them. Microsoft Windows Phone 7 was open but Nokia was clearly customer number one and in a special, preferred customer, position. Now Google is not just a supplier but a competitor. I think Microsoft is secretly happy about all of this. It makes their relationship with Nokia look tame by comparison.

Could this be herd instinct? Apple makes the iPhone. HP bought Palm. Microsoft is in bed with Nokia. RIM makes Blackberry. Perhaps Google fell victim to the “everyone else is doing it” syndrome. Somehow I doubt it. The people at Google are nothing if not sharp. Still, it has happened at this level before.

One possible reason for the acquisition might be to push NFC (near field communication). NFC requires that very specific hardware be placed inside smartphones, and the Motorola Mobility arm of Google could push this. However, I think NFC can be effectively pushed without making the phones themselves. I don’t buy this as a reason for the acquisition.

That brings me to one final reason for the purchase – set top boxes. I have discussed how the real goal is a very broad and unified ecosystem. The TV is a big part of that. Google could merge GoogleTV into the Motorola Mobility set top box units. As a competitor in the set top box space they might be in a good position to drive their ecosystem. I have argued before that consumers don’t like extra boxes and hence AppleTV and even external game boxes (PS3, Wii, Xbox) are interim solutions. The one external box that has some life left is the cable box.  Google could merge the cable box, GoogleTV and Android games into one piece of hardware. Moving between cable product, internet streams and applications could be made very unified and essentially transparent to the consumer.

Summary: This acquisition is all about the patent portfolio and using it as a counter to Apple and Microsoft. However, Google is left with a hardware business that competes with key customers.

My recommendation: if I was willing to tell Apple what to do, then why not another highly profitable multibillion-dollar company? So Google, here is what you should do. Sell off the mobile device arm of Motorola Mobility but keep set top boxes. Keep all of the patents and license them to whoever acquires the cell phone business. Finally, merge GoogleTV into the cable box and make GoogleTV fully compatible with Android games. Use your newfound cable box presence to drive a broader ecosystem that is more unified than what consumers have now.

If you have followed my blog from its inception you know I feel the phone will become your primary computer. That feeling continues to grow stronger. The more difficult issue is discerning just what path this will take. I have mentioned before that companies can fail by jumping to the final solution and not realizing that change often progresses along a jagged path. My ultimate dream is a device that connects to the proper interface in a transparent fashion.

Right now we have WiFi and Bluetooth. Apple lets AirPlay ride on WiFi. This gives some support for video transfer from an iPad to a TV but requires an Apple TV device to make it happen. However, none of this handles the high bandwidth needed to make the user interface, and the high definition video that goes with it, work without compromise. Enter standards groups to the rescue; unfortunately, too many groups.

A first stab at this came with wireless USB, an ultra wideband technology that allows up to 480 Mbps but only at a range of 3 meters. That is inadequate for 1080p 60 Hz video, much less 3D and higher resolutions. This technology has gotten very little traction.
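The arithmetic behind that "inadequate" claim is simple. Uncompressed 1080p at 60 frames per second and 24 bits per pixel needs roughly six times what wireless USB can deliver:

```python
# Back-of-the-envelope: raw 1080p video at 60 frames/s, 24 bits per pixel.
width, height, fps, bits_per_pixel = 1920, 1080, 60, 24

raw_bps = width * height * fps * bits_per_pixel
print(f"Uncompressed 1080p60: {raw_bps / 1e9:.2f} Gbps")        # 2.99 Gbps

wireless_usb_bps = 480e6   # wireless USB peak, and only within ~3 meters
print(f"Shortfall factor: {raw_bps / wireless_usb_bps:.1f}x")   # 6.2x
```

Compression can close some of that gap, but compromise-free video, never mind 3D or higher resolutions, needs far more headroom than 480 Mbps provides.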

The early leader was the WHDI (Wireless Home Digital Interface) consortium. However, the WirelessHD Consortium has an impressive list of supporters. Next comes the Wireless Gigabit Alliance, or WiGig. They also have some big players behind them, including some of the same companies backing WirelessHD. It’s all very confusing.

Recall what I said about major vs. minor trends. This has signs of being a major trend. But wait, it doesn’t “feel” that way. People aren’t scrambling to get wireless video hardware. That’s going to change. There is a lot in the works and it will take time to gel but it will gel.

Who am I betting on? Well, I’ll start with an interesting fact. Of particular interest here is WiGig’s adoption of support for wireless DisplayPort. Not mentioned on the WiGig website is an important name: Apple. Recall that Apple is the big force behind DisplayPort. A second force pushing WiGig is the movement by companies like Panasonic to take WiGig mobile. WHDI is mobile capable but has more challenges extending its speed and flexibility. Another related major announcement is the Qualcomm Atheros AR9004TB chip for WiGig. However, this looks suited for laptops and docking stations, not phones. It will compete with WirelessHD solutions such as the SiI6320/SiI6310 WirelessHD® HRTX chipset.

How does this play out? The Qualcomm chip shows the way to docking stations for tablets and phones. These may have some success, but the real need is for a more embedded solution. That will start with laptops, which have the luxury of more board space and larger batteries. However, it will move into phones once the power issue is solved. This won’t be the end. So far I have been discussing wireless video. True transparency will require something more general. For that I expect something like wPCIe from Wilocity to allow full connectivity. Initially wPCIe will allow laptops to wirelessly dock with peripherals. Longer term, this too will migrate into the tablet and the phone. At that point your phone will wirelessly dock with external hard drives, displays, and pretty much anything else you would hook to a desktop. wPCIe is based on the WiGig standard, so it will be a quick extension to WiGig wireless video. That also means the range will be adequate to allow your phone or laptop to be several meters away from the other end of the wireless link.

Currently, none of this matches the speed of Thunderbolt, but it may be close enough. WirelessHD has higher speeds already defined and I expect WiGig to follow. Expect WiGig to look a lot like wireless Thunderbolt: Thunderbolt is basically DisplayPort plus PCI Express (PCIe), and WiGig will also include DisplayPort and PCIe. For true speed freaks, a hard wired connection will always be best. Thunderbolt will move to 100 Gbps when the move is made from copper to fiber. By then WiGig and WirelessHD will just be matching copper-connected Thunderbolt in performance.

There’s a lot more at play here that makes it difficult to predict the winner. WHDI works at lower frequencies and can connect through walls; WirelessHD and WiGig are strictly line of sight. However, some of the claims for future versions of WHDI are suspect since they involve very high data rates relative to the available frequency bandwidth. WiGig has the ability to move from a WiFi connection to a WiGig connection in a transparent fashion. WHDI is mobile capable now since it rides on older WiFi technology. I am uncertain when a low power WiGig or WirelessHD chip will be available.
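Why call those data-rate claims suspect? Shannon's capacity formula puts a hard ceiling on what any channel can carry, and a narrow channel at lower frequencies simply can't match a multi-gigahertz channel at 60 GHz. The channel widths and SNR below are illustrative assumptions I picked for the comparison, not figures from any consortium's spec:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon limit: C = B * log2(1 + SNR)."""
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr)

# Illustrative numbers: a 40 MHz channel in the 5 GHz band (WHDI-like)
# versus a ~2 GHz channel at 60 GHz (WirelessHD/WiGig-like), both at 20 dB SNR.
narrow = shannon_capacity_bps(40e6, 20)
wide = shannon_capacity_bps(2e9, 20)
print(f"40 MHz @ 20 dB SNR: {narrow / 1e6:.0f} Mbps max")   # ~266 Mbps
print(f"2 GHz @ 20 dB SNR: {wide / 1e9:.1f} Gbps max")      # ~13.3 Gbps
```

Multi-gigabit claims over a tens-of-megahertz channel would require spectral efficiencies far beyond anything practical, which is exactly why the wide channels available at 60 GHz are so attractive despite the line-of-sight limitation.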

Cliches exist because they contain truth in an easy to digest form. There’s an old saying among engineers: “Anyone can build a bridge. It takes a good engineer to do it on time and under budget.” That one holds the essence of why I consider good engineering more difficult to accomplish than good science. My formal training was as a scientist. I have been around scientific research in both the theoretical and experimental areas and I certainly appreciate the difficulties involved. However, it is the imposition of schedule and budget that makes good engineering even more difficult than good science. Budget doesn’t just apply to the resources involved in creating the item; it also covers the cost of manufacture.

Great engineering means understanding “just good enough.” Like many topics in this blog, the concept of “just good enough” is much broader and more important than many people think. It is related to the concept of quality. In his book Quality is Free, Philip Crosby defines quality as “conformance to requirements.” Great engineering meets the customer’s needs in the best manner, and best, in most cases, means finding a solution the customer can afford.

For this reason, designing a mid-sized sedan like the Honda Accord is much more difficult than designing something like a Ferrari Italia. The Accord is in a much more competitive space and has tremendous budget constraints. If you want to upgrade the audio system then you have to find cost savings elsewhere. Many thousands of components have characteristics that must be traded off in order to meet the target price point. The Ferrari design starts by asking “What’s best?” Just for fun, when it comes to the Accord, you get to layer on tougher customer expectations. The Accord isn’t a showpiece. It is a day-to-day working automobile and must perform perfectly for many years with few service needs. The Ferrari is expected to require some pampering. Even several-year-old Ferraris usually have just a few thousand miles on them. The Accord is a much tougher design challenge.

One engineer I admire is Steve Wozniak. If you look at the Apple II, the computer that made Apple a real company, you find many examples of awesome engineering. Again and again features are included and performance is achieved with elegant rather than brute force design. The result was a great combination of features at a reasonable price for its day. To highlight what I mean by “just good enough” I am going to single out just one of the many elegant design choices in the Apple II; but first I need to set the stage.

The personal computing era was kicked off in 1975 with the January issue of Popular Electronics. The cover article was on the construction of a computer kit called the MITS Altair 8800. With it came the introduction of the S100 bus. The Altair 8800 was a frame style design where cards were added to increase functionality. While many functions such as main memory have moved to the motherboard, we retain this expansion concept today although the S100 bus has mostly moved into history.

The Altair 8800 was copied by many companies and expanded upon. The S100 bus became an industry standard expansion bus, and lots of companies made cards for it. Because of this, many computers placed only the basics on the motherboard in an effort to control price. There are problems with this approach. Since the Altair included no game controller (joystick, paddle, buttons) functionality, there was no standardized game interface. I once looked at the cost of adding joysticks to an S100 based computer. The card alone was several hundred dollars because the approach involved expensive analog to digital converters (ADCs). The result was that only keyboard based games evolved for the S100 based machines.

During this time, games like Pong and Breakout were popular. It made sense to bring them to personal computers, but they required interactive game controllers, i.e. paddles or joysticks. A keyboard used as a controller lacked the same smooth interactivity. Using the keyboard for games was a compromise aimed at satisfying the engineers and accountants rather than the customers, but it was a compromise most computer manufacturers had adopted. Enter Apple and a few others. In 1977 Apple introduced the Apple II. It came with game paddles along with games like Breakout. To accomplish this in a cost effective manner, Wozniak pushed most of the design into software. Since he had designed Breakout in hardware for Atari, this was a big change in mindset. Great engineers adopt what is best as opposed to just reworking what they did in the past. Simplifying hardware and pushing complexity into software would turn out to be a very important trend, and here it was at a very early stage. Look at the schematic below.

This is part of the schematic of the Apple II included in the Apple II Reference Manual dated January 1978. What looks like a 553 integrated circuit (H13) is actually a 558, a quad version of the venerable 555 timer chip. The 558 is used to generate four paddle, or two joystick, inputs. Each paddle is just a variable resistor. Hooked into the 558, the resistance of the paddle controller determines the oscillation frequency of a simple RC oscillator.

A loop in the code keeps reading the oscillator. The microprocessor can only read a 1 or a 0: if the voltage is above a certain level the microprocessor sees a 1, below that it sees a 0. The Apple II loops while looking at the game paddle input. By looking at the pattern, for example 111000111000111000, it can determine the frequency of oscillation. That frequency maps to a game paddle position, and the on-screen paddle is moved to the appropriate spot. The beauty of this is that the paddle controller doesn’t have to be super linear. The paddles just need to be consistent, i.e. all paddles need to act the same way, because nonlinearities can be corrected in software. To the user, who gets visual feedback from the screen while turning the paddle, this is all “just good enough.” It is also a high quality solution since it meets the user’s expectations and the requirements for playing games like Breakout. Including games and controllers gave the Apple II great consumer appeal and was a big part of its success, and with it the success of Apple Computer.
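To make the pattern-reading idea concrete, here is a rough sketch in Python. This is not Wozniak's actual routine (that ran as a tight 6502 timing loop); it just illustrates how a stream of 1s and 0s yields a period, and how a software mapping can absorb any consistent nonlinearity in the paddle's resistor:

```python
def estimate_period(samples: str) -> float:
    """Estimate the oscillator period, in samples, from a stream of 1s and 0s."""
    # Positions where the signal flips between 1 and 0.
    flips = [i for i in range(1, len(samples)) if samples[i] != samples[i - 1]]
    if len(flips) < 2:
        return float("inf")
    # The average spacing between flips is half a period.
    half_period = (flips[-1] - flips[0]) / (len(flips) - 1)
    return 2 * half_period

def paddle_position(samples: str, min_period: float, max_period: float) -> int:
    """Map the measured period onto a 0-255 paddle position. Any consistent
    nonlinearity in the hardware could be corrected inside this mapping."""
    period = min(max(estimate_period(samples), min_period), max_period)
    return round(255 * (period - min_period) / (max_period - min_period))

# The example pattern from the text flips every 3 samples: a period of 6.
print(estimate_period("111000111000111000"))   # 6.0
```

The period limits and the 0-255 range are my illustrative choices. The point survives the simplification: one cheap timer chip plus a software loop replaced an expensive ADC card.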

Today we often see companies just iterating on a theme. These are the so-so companies. Great companies sit back, look at the bigger picture and think about possibilities. Rather than layering expensive, iterative solutions on each other, the great companies rethink the approach and create solutions that are cost effective while meeting user requirements. Exceptional companies go beyond this and create solutions to user requirements that the user didn’t know he had. That, however, is a topic for another post.

I’m back home and connected. Yeah! My kids are happy since World of Warcraft now works well. I’m trying to catch up and realized I haven’t posted in several days. Next week won’t be any better since I will be heading to Houston for a behind the scenes tour of Mission Control. I hope that trip is as much fun as I expect it will be.

Now to the techie stuff. I was flying today and the conversation turned to how things should work vs. how they really work. Of course the initial topic was about flying. I was working through approach procedures using a new autopilot. I fly a Cirrus SR22 equipped with Avidyne R9 avionics. Recently the autopilot was upgraded from the STEC 55X to the Avidyne DFC-100. This is a big upgrade. The STEC understood rate of turn (from a turn coordinator), altitude (air pressure sensor), course error (from Horizontal Situation Indicator), and GPS course. The new autopilot receives input from the GPS, Flight Management System and the Air Data Attitude Heading Reference System. In other words it knows just about everything about the airplane and its condition. It even knows flap position and engine power. The end result is a vastly superior autopilot. Sequencing is automatic (most times – see below). You can put in a flight profile and the plane will fly it including climbs and descents. The operation is very intuitive and a great example of intelligent user interface design. If you are climbing at a fixed IAS (Indicated AirSpeed) and set up to lock onto a fixed altitude the IAS button is green to show it is active and the ALT button is blue to show it is enabled but not locked. When you get to the desired altitude the ALT light blinks green and then goes steady green when locked onto the desired altitude. I could go on and on about how great this is and if you have questions just ask.

Now to more specifics about interface design. When you use the DFC-100 autopilot to fly an instrument landing system (ILS) approach, it is very automatic. If you punch VNAV, vertical navigation, you can have the autopilot fly the entire procedure including the appropriate altitudes. When the radio signal of the ILS is received and verified correct (all automatic), the system shifts to using the electronic ILS pathway to the runway. So far everything has been very automatic. If you exit the clouds and see the runway, you disconnect the autopilot and land. The problem comes when the clouds are too low to see the runway even when you are close and down low. This is a very dangerous time. At the critical point the plane is 200′ above the ground and there is little margin for error. If you don’t see the ground you execute the missed approach.

This is where the great user interface breaks down. If you do nothing, the autopilot will fly the plane into the ground. In order to have it fly the missed approach, the following must happen. After the final approach fix, but only after, you must press a button labeled Enable Missed Approach. At the decision height, when you are 200′ above the ground, you must either disconnect the autopilot and start the missed approach procedure manually, or shift from ILS to FMS as the navigation source and press the VNAV button. I can hear people, including pilots, asking me what the big deal is. The big deal is that this is when you really want the automatic systems looking over your shoulder and helping out. If you forget to shift from ILS to FMS, the plane will want to fly into the ground. That’s a very bad thing. The system is still great. Even at this moment it is much better than the old system, and I am not saying I would want to go back. I am saying it could be better, and that this operation doesn’t fit with how seamless the autopilot’s operation usually is.

What the system should do is automatically arm the missed approach. I see no reason for this to be a required manual operation with the potential to be forgotten. The pilot should select the decision height at which the missed approach will begin to be executed. When that point is reached, if the autopilot has not been disconnected, the autopilot should start flying the missed approach, including VNAV functionality and shifting the navigation source from ILS to FMS automatically. The result would be increased safety since the system wouldn’t be requiring command input from the pilot at a critical moment.
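The behavior I am proposing reduces to a very small piece of logic. This sketch is purely illustrative (real avionics software is nothing like this, and the field names are my own), but it shows how little decision-making the automatic hand-off actually requires:

```python
from dataclasses import dataclass

@dataclass
class ApproachState:
    altitude_agl_ft: float       # height above the ground
    decision_height_ft: float    # pilot-selected decision height
    autopilot_engaged: bool
    nav_source: str              # "ILS" or "FMS"
    vnav_engaged: bool

def update_autopilot(state: ApproachState) -> ApproachState:
    """Proposed behavior: if the pilot has not disconnected by the decision
    height, fly the missed approach instead of continuing toward the ground."""
    if state.autopilot_engaged and state.altitude_agl_ft <= state.decision_height_ft:
        state.nav_source = "FMS"    # shift the navigation source automatically
        state.vnav_engaged = True   # start flying the published missed approach
    return state

# At 200 ft with the autopilot still engaged, the system goes missed on its own;
# if the pilot sees the runway, disconnecting the autopilot suppresses all of it.
s = update_autopilot(ApproachState(200, 200, True, "ILS", False))
print(s.nav_source, s.vnav_engaged)   # FMS True
```

The pilot's act of disconnecting remains the override, so the automation only steps in at exactly the moment it is most likely to be needed.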

The discussion above relates to what I have been covering in this blog. As computing systems improve and move into every area of our lives, issues like the one above will pop up. Everything about the DFC-100 is vastly superior to the old STEC. The issue is consistency of use. As our computing systems get better and better user interfaces, minor inconsistencies will appear to us as big annoyances. Look at the iPad. If you think of it as an eBook reader that lets you view mail and surf the web, it is an awesome device. If you look at it as a fun device with simple apps and games, it is awesome. As soon as you want it to be your main computer, things like the lack of a user accessible directory structure become big. Compared to the old Newton or the PDA, the iPad and the iPhone are major advances. However, with this new capability come raised expectations. Developers don’t get to do great things and then sit back. As soon as users get comfortable with the next great thing, they begin to find annoyances. One of Apple’s strengths has been minimizing these annoyances, but even on the best devices they are there. Consistency of user experience is a big deal, and getting there is tough. My point is that small details matter. How the icons look, how smooth the scrolling is, the animation when actions are taken: these are all small things that matter. One of the reasons for the success of the iPad and iPhone has been this consistency and sweating the details when it comes to the user interface. As we merge devices and functions in the post-PC world, it will be critical that these disruptions, the non-transparent use scenarios, be identified and fixed.

I thought about making the title of this post “I’m Right – They’re Wrong.” While I like the cloud for data everywhere and for syncing of data, I don’t believe in data ONLY in the cloud. There has been a lot of press around putting everything in the cloud. The Chromebook is one attempt at this. On the surface, my techie side gets excited. I hear cheap, long battery life, one data set and a unified experience across devices. The major thing I hear is low upkeep. Someone else does most of the application updates and makes sure things work. This last part, however, sounded hauntingly familiar. Then it hit me. This was the promise of thin clients. A long time ago in a different computing world, thin clients were going to save companies lots of money. The clients themselves would be cheaper. Falling PC prices killed that as a major selling point. The second thing was ease and consistency of software maintenance. The problem was that the world went mobile. People couldn’t afford to lose software access when they weren’t on the corporate network. In the end thin clients failed. Fast forward to today. The same issues apply to the Chromebook. Why get a Chromebook when a netbook can do so much more? Then there is the issue of connectivity. What happens when there isn’t a WiFi hotspot around? Are you thinking 3/4G? Think again. Look at today’s data plans and their capped data. Most people can’t afford to have everything they type, every song they play, every picture they look at and every video clip they show go over the network. Local storage can solve some of this but then you have independent data and the programs to access that data on the local machine. In other words you are back to managing a PC again.

Currently I am visiting my sister in Mobile, AL. I realized I needed to freshen up my blog and waiting till I got back home would be too long. No problem I thought. I have my iPad with me and it will be a chance to learn the basics of Blogsy. That’s what I’m doing now but it has been an enlightening experience and is the genesis of this post. What you need to know is that my sister’s house lacks WiFi. Since she and her husband spend a lot of time traveling in their RV, they use a Verizon 4G modem plugged into their laptop. That works for them but it doesn’t help me unless I go sit on my brother-in-law’s laptop. Of course there’s no need for that since my iPad has 3G. Oops! One big problem – the connection is unreliable. Here I am in Mobile, AL, a few miles from their regional airport and I can’t get a reliable data connection. I could launch into an AT&T tirade but that would miss the bigger picture. Mobile, AL is a major city. If I have problems here then what about more remote places? What about other countries? What if I were using a Chromebook? Right now I am writing this post. I will upload it when I have a better connection. I just can’t see buying into a usage model that demands 24/7 connectivity. For that reason I have no desire for a Chromebook. The Chromebook will fail.

Transparency of use is still coming but it will happen in a way that takes into account the issues I have just raised. Apple’s iCloud will sync data and leave a copy on each device. Microsoft Mesh does the same. I still believe that a modified version of this together with the Chromebook approach will win in the end. The difference will be that the modified Chromebook (phonebook?, Plattbook?, iBook?) won’t connect to the internet directly but will be a peripheral device for the phone. Your phone will be your wallet and as such always with you. It will also be your primary data device. It will sync with other devices through the cloud and be backed up to the cloud but interactive data access will be to the phone.
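The sync-with-local-copies model I am describing can be sketched simply. This is my own toy illustration of the pattern, not how iCloud or Mesh actually work: every device keeps a full copy, writes always succeed offline, and reconciliation happens whenever a connection appears.

```python
import time

class LocalFirstStore:
    """Each device keeps a full local copy; edits always succeed offline and
    are queued, then reconciled with the cloud (or the phone) when connected."""
    def __init__(self):
        self.data = {}       # the device's own complete copy: key -> (value, timestamp)
        self.pending = []    # keys edited while offline, waiting to sync

    def write(self, key, value):
        self.data[key] = (value, time.time())
        self.pending.append(key)          # remember to push this later

    def sync(self, remote: dict):
        # Push queued local edits, then pull anything newer from the remote.
        for key in self.pending:
            remote[key] = self.data[key]
        self.pending.clear()
        for key, (value, ts) in remote.items():
            if key not in self.data or ts > self.data[key][1]:
                self.data[key] = (value, ts)

cloud = {}
ipad = LocalFirstStore()
ipad.write("post-draft", "Chromebook thoughts")   # works with no connection at all
ipad.sync(cloud)                                   # uploaded once back in WiFi range
print(cloud["post-draft"][0])   # Chromebook thoughts
```

That is exactly what I did with this post in Mobile: write locally, upload later. A cloud-only design can't express the first step, and that is why the local copy has to stay.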