
First and foremost, Apple sells a polished user experience. Apple sweats the details. From the moment you walk into the store the experience is polished and first-rate. Unboxing your purchase continues the experience. Even Apple's service group, AppleCare, is different. You get lots of attention from people who know what they are doing. Apple hardware has a lot of refinement. The OS feel is consistent, and people consistently talk about Apple products as intuitive and easy to use.

I have written about convergence and transparency. These two trends play right into Apple's strengths. Apple is selling more and more laptops because people have purchased iPhones. People who have purchased iPads are now buying iPhones. The release of OSX Lion moves the laptop closer to iOS. The iPhone and the iPad use the same OS. This means transparency of use. But, for the first time, I see Apple moving backwards. Their new policy requires that Apple receive 30% of any in-app purchase. I can see how Apple reached this point. Games would be offered for free in the Apple App Store. Once you started playing the game, you found out you had to make an in-app purchase to go beyond level 3. Apple saw this as a direct end run around its App Store policies, a way for developers to avoid paying Apple its cut. Admittedly, at 30% that cut is big, and hence companies, especially small ones, are highly motivated to avoid this form of app store "tax." None of this is a big problem as long as we are talking about games. Things are different when it comes to magazines and books.

So far the best example of the move towards transparency has been the Kindle ecosystem. There are Kindle apps for just about every device. There are apps for Android, iPhone, iPad, Mac, and Windows. If you buy a book through any one app it is available on all of the others. Bookmarks are shared. You can read on your tablet, pick up on your phone and finish up on your laptop. In every case, when you move to a new device, the app knows where you left off on the old one. This is transparency of use in action. Now Apple is working to hinder that transparency.

Reading books is still a transparent experience. However, buying them now involves exiting the Kindle program and using a web browser to go to Amazon.com. On iOS you can't even click a button in the Kindle app and have it open Safari at the appropriate URL, something the Mac Kindle app can still do. What should really happen is that the Kindle store should be built into the Kindle app. I suspect it eventually will be on Android. It will never be on iOS devices. Apple's 30% cut would change a money maker into a loss-leader product. Not only is 30% too high, I see no reason Apple should get anything. The books aren't being bought through Apple's online store. Besides, it is anticompetitive. It gives Apple's own iBooks a competitive pricing advantage. The problem is, iBooks isn't as universal as Kindle. This small chink in Apple's image is becoming a growing crack. Online forums have end users griping about it. This is a chance for Google to press Apple and change the image of Android vs. iOS.
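To see why a 30% cut turns an ebook sale from a money maker into a loss, some rough, illustrative numbers help. These are assumptions for the sake of the arithmetic, not Amazon's or Apple's actual figures: under agency pricing the publisher keeps roughly 70% of the price, leaving the retailer about 30% to cover everything else.

    # Illustrative margin math for an in-app ebook sale.
    # All numbers are assumptions, not Amazon's or Apple's actual figures.
    book_price = 9.99                                     # a typical agency-priced ebook
    publisher_share = 0.70                                # publisher keeps ~70% of the price
    retailer_margin = book_price * (1 - publisher_share)  # Amazon's gross margin, ~$3.00
    apple_cut = book_price * 0.30                         # Apple's 30% of the same sale, ~$3.00

    print(f"Retailer margin before Apple's cut: ${retailer_margin:.2f}")
    print(f"Apple's 30% in-app cut:             ${apple_cut:.2f}")
    print(f"Margin left after Apple's cut:      ${retailer_margin - apple_cut:.2f}")

On numbers like these the in-app sale nets Amazon essentially nothing before it even covers delivery and payment costs, which is exactly why a built-in Kindle store will never appear on iOS.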

Until now, Android has been an interesting phone OS beloved by techies for its openness and many features. Most consumers have viewed, and in fact still view, Apple's iOS as the more polished and bug-free operating system for phones and tablets. Apple's greed could change that. Android gets more and more polished day by day. If in-app purchases become the norm for Android and the exception for iOS then consumers will see Android as the easier and more transparent operating system. Imagine the difference if Amazon makes Kindle apps integrate smoothly with the Kindle store everywhere except on Apple devices. As more people buy and read ebooks, this will push them towards Android instead of iOS. All you have to do is read this to see how Apple may be inadvertently causing apps to be less friendly. Android versions of the apps won't be so limited.

Right now Apple's new policy has done little other than make Apple richer and tick off some app writers. However, as Android keeps getting stronger, this policy might come to threaten Apple when consumers begin to find buying and reading ebooks and ezines easier and more transparent on Android than on iOS.


HP paid $1.2B for Palm. Now they are dumping that and more. I have been saying that the only ecosystems that will survive are Apple, Google (Android) and Microsoft. The carnage has started. WebOS was a good OS. That doesn’t matter. It was too late, too poorly marketed and never got traction. Now it is essentially dead. RIM will follow although not in the near future.

More shocking is the announcement that HP may exit the PC market. HP leads the PC market in market share. How can they possibly want to exit that market? To understand why HP could even be considering this you need to look a little deeper. The laptop market is very competitive. That translates to low margins for everyone except Apple. Only Apple has a customer base willing to consistently pay a premium for their laptop product. Additionally, HP's market share has been falling. But… here is the main reason. The phone is becoming the dominant computing device. The laptop is rapidly becoming secondary. Desktops are already secondary devices. The only way to shore up laptops in a way that would maintain margins was to develop an ecosystem with laptops as part of it. WebOS was a poor attempt at that. With the failure of WebOS, HP laptops will have to compete as just another part of the Microsoft ecosystem. That's OK now but it is a position that gets worse each day. If you count tablets as part of mobile computing then Apple has already surpassed HP in market share. What HP is afraid of is being trapped in a market that is losing relevance, decreasing in size and so commoditized that there is little differentiation. All that will lead to little or no profit.

The big takeaway from this is that it is not an isolated event. It is part of the convergence trend I have been discussing. There will be more Titanic changes to come and they will involve more than RIM.

By now most readers will be aware that Google is buying Motorola Mobility. I started to write about this when I first heard the news but I wanted to think about it and explore the implications and potential reasons. Time is up. Here are my thoughts.

The most straightforward reason is patent defense. When Google lost out to Microsoft and Apple in the bidding for the Nortel patent portfolio, it was left in a very bad position. Android violates several of the Nortel patents. Google launched an offensive claiming Apple and Microsoft were using patents, as opposed to compelling solutions, as a way to attack Google. We must remember that Google also bid for these patents and, had they won, would probably have used them against Microsoft and Apple. Furthermore, Google rebuffed an offer to join Microsoft and Apple in acquiring the patents. If the purchase of Motorola Mobility is indeed a defensive play then this is nothing more than another round of that old patent game: "I'll cross-license mine if you will cross-license yours." Considering the large amounts of cash Google is sitting on, this might be a very sensible move.

Could there be more to the acquisition than patents? Google has made cell phones in the past when it was jump-starting Android. But should they be a cell phone producer? In the PC space Apple has been a small closed ecosystem compared to the loose and very diversified Microsoft ecosystem. The result was a larger, cheaper and more diversified hardware and software ecosystem for Windows (Microsoft) compared to OSX (Apple). Recall that, at one time (Apple II), Apple dominated the desktop space. The diversity of the Microsoft-based environment resulted in Apple becoming a niche player. Today, despite Apple's early lead, there is a strong possibility that Android will be the Windows of the smartphone and tablet space. I see no reason for Google to try to "out-Apple" Apple. Think of the strange relationship that is going to exist with companies like HTC and Samsung. In the recent past, market pressure pushed those companies towards Google. Apple was closed to them. Microsoft Windows Phone 7 was open, but Nokia was clearly customer number one and in a special, preferred-customer position. Now Google is not just a supplier but a competitor. I think Microsoft is secretly happy about all of this. It makes their relationship with Nokia look tame by comparison.

Could this be herd instinct? Apple makes the iPhone. HP bought Palm. Microsoft is in bed with Nokia. RIM makes Blackberry. Perhaps Google fell victim to the “everyone else is doing it” syndrome. Somehow I doubt it. The people at Google are nothing if not sharp. Still, it has happened at this level before.

One possible reason for the acquisition might be to push NFC. NFC requires that very specific hardware be placed inside smartphones. The Motorola Mobility arm of Google could push this. However, I think NFC can be effectively pushed without making the phones themselves. I don’t buy this as a reason for the acquisition.

That brings me to one final reason for the purchase – set top boxes. I have discussed how the real goal is a very broad and unified ecosystem. The TV is a big part of that. Google could merge GoogleTV into the Motorola Mobility set top box units. As a competitor in the set top box space they might be in a good position to drive their ecosystem. I have argued before that consumers don’t like extra boxes and hence AppleTV and even external game boxes (PS3, Wii, Xbox) are interim solutions. The one external box that has some life left is the cable box.  Google could merge the cable box, GoogleTV and Android games into one piece of hardware. Moving between cable product, internet streams and applications could be made very unified and essentially transparent to the consumer.

Summary: This acquisition is all about the patent portfolio and using it as a counter to Apple and Microsoft. However, Google is left with a hardware business that competes with key customers.

My recommendation: If I was willing to tell Apple what to do, then why not do the same for another highly profitable multibillion-dollar company? So Google, here is what you should do. Sell off the mobile device arm of Motorola Mobility but keep set top boxes. Keep all of the patents and just license them to the entity acquiring the cell phone business. Finally, merge GoogleTV into the cable box and make GoogleTV fully compatible with Android games. Use your newfound cable box presence to drive a broader ecosystem that is more unified than what consumers have now.

If you have followed my blog from its inception you know I feel the phone will become your primary computer. That feeling continues to grow stronger. The more difficult issue is discerning just what path this will take. I have mentioned before that companies can fail by jumping to the final solution and not realizing that change often progresses along a jagged path. My ultimate dream is a device that connects to the proper interface in a transparent fashion.

Right now we have WiFi and Bluetooth. Apple lets AirPlay ride on WiFi. This gives some support for video transfer from an iPad to a TV but requires an Apple TV device to make it happen. However, none of this handles the high bandwidth needed to make the user interface, and the high-definition video that goes with it, work without compromise. Enter standards groups to the rescue; unfortunately, too many groups.

A first stab at this came with wireless USB. This is an ultra-wideband technology that allows speeds up to 480 Mbps, but only at a range of 3 meters. That is inadequate for 1080p 60 Hz video, much less 3D and higher resolutions. This technology has gotten very little traction.
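Some back-of-the-envelope arithmetic shows how short 480 Mbps falls. The assumption here is uncompressed 24-bit color, which is what an HDMI-style video link carries:

    # Illustrative arithmetic: uncompressed 1080p60 video vs. wireless USB.
    width, height = 1920, 1080
    frames_per_second = 60
    bits_per_pixel = 24                 # 8 bits each for red, green, blue

    video_bps = width * height * frames_per_second * bits_per_pixel
    wireless_usb_bps = 480e6            # wireless USB peak, and only within ~3 m

    print(f"Uncompressed 1080p60: {video_bps / 1e9:.2f} Gbps")       # ~2.99 Gbps
    print(f"Wireless USB peak:    {wireless_usb_bps / 1e9:.2f} Gbps")
    print(f"Shortfall:            {video_bps / wireless_usb_bps:.1f}x")

Blanking intervals and audio push the real figure even higher, so the link is roughly six times too slow before you even think about 3D or higher resolutions.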

The early leader was the WHDI (Wireless Home Digital Interface) consortium. However, the WirelessHD Consortium has an impressive list of supporters. Next comes the Wireless Gigabit Alliance, or WiGig. They also have some big players behind them, including some of the same companies that back WirelessHD. It's all very confusing.

Recall what I said about major vs. minor trends. This has signs of being a major trend. But wait, it doesn’t “feel” that way. People aren’t scrambling to get wireless video hardware. That’s going to change. There is a lot in the works and it will take time to gel but it will gel.

Who am I betting on? Well, I'll start with an interesting fact. Of particular interest here is the adoption of support for wireless DisplayPort by WiGig. Not mentioned on the WiGig website is an important name – Apple. Recall that Apple is the big force behind DisplayPort. A second force pushing WiGig is the movement by companies like Panasonic to take WiGig mobile. WHDI is mobile capable but has more challenges extending its speed and flexibility. Another related major announcement is the Qualcomm Atheros AR9004TB chip for WiGig. However, this looks suited for laptops and docking stations and not phones. It will compete with solutions for WirelessHD such as the SiI6320/SiI6310 WirelessHD® HRTX Chipset.

How does this play out? The Qualcomm chip shows the way to docking stations for tablets and phones. These may have some success but the need is for a more embedded solution. That will start with laptops, which have the luxury of more board space and larger batteries. However, it will move into phones once the power issue is solved. This won't be the end. So far I have been discussing wireless video. True transparency will require something more general. For that I expect something like wPCIe from Wilocity to allow full connectivity. Initially wPCIe will allow laptops to wirelessly dock with peripherals. Longer term, this too will migrate into the tablet and the phone. At that point your phone will wirelessly dock with external hard drives, displays, and pretty much anything else you would hook to a desktop. wPCIe is based on the WiGig standard so it will be a quick extension to WiGig wireless video. That also means that range will be adequate to allow your phone or laptop to be several meters away from the other end of the wireless link.

Currently, none of this matches the speed of Thunderbolt but it may be close enough. WirelessHD has higher speeds already defined and I expect WiGig to follow. Expect WiGig to look a lot like wireless Thunderbolt. Thunderbolt is basically DisplayPort plus PCI Express (PCIe). WiGig will also include DisplayPort and PCIe. For true speed freaks, a hard-wired connection will always be the best. Thunderbolt will move to 100 Gbps when the move is made from copper to fiber. By then WiGig and WirelessHD will just be matching copper-connected Thunderbolt in performance.

There's a lot more at play here that makes it difficult to predict the winner. WHDI works at lower frequencies and can connect through walls. WirelessHD and WiGig are strictly line of sight. However, some of the claims for future versions of WHDI are suspect since they involve very high data rates relative to the available frequency bandwidth. WiGig has the ability to move from a WiFi connection to a WiGig connection in a transparent fashion. WHDI is mobile capable now since it rides on older WiFi technology. I am uncertain when a low power WiGig or WirelessHD chip will be available.

Clichés exist because they contain truth in an easy-to-digest form. There's an old saying among engineers: "Anyone can build a bridge. It takes a good engineer to do it on time and under budget." That one holds the essence of why I consider good engineering more difficult to accomplish than good science. My formal training was as a scientist. I have been around scientific research in both the theoretical and experimental areas and I certainly appreciate the difficulties involved. However, it is the imposition of schedule and budget that makes good engineering even more difficult than good science. Budget doesn't just apply to the resources involved in the creation of the item but also involves the cost of manufacture. Great engineering means understanding "just good enough." Like many topics in this blog, the concept of "just good enough" is much broader and more important than many people think. It is related to the concept of quality. In his book Quality is Free, Philip Crosby defines quality as "conformance to requirements." Great engineering meets the customer's needs in the best manner. Best, in most cases, means finding a solution the customer can afford. For this reason designing a mid-sized sedan like the Honda Accord is much more difficult than designing something like a Ferrari Italia. The Accord is in a much more competitive space and has tremendous budget constraints. If you want to upgrade the audio system then you have to find cost savings elsewhere. Many thousands of components have characteristics that must be traded off in order to meet the target price point. The Ferrari design starts by asking "What's best?" Just for fun, when it comes to the Accord, you get to layer on tougher customer expectations. The Accord isn't a showpiece. It is a day-to-day working automobile and must perform perfectly for many years with few service needs. The Ferrari is expected to require some pampering. Even several-year-old Ferraris usually have just a few thousand miles on them. The Accord is a much tougher design challenge.

One engineer I admire is Steve Wozniak. If you look at the Apple II, the computer that made Apple a real company, you find many examples of awesome engineering. Again and again features are included and performance is achieved with elegant rather than brute force design. The result was a great combination of features at a reasonable price for its day. To highlight what I mean by “just good enough” I am going to single out just one of the many elegant design choices in the Apple II; but first I need to set the stage.

The personal computing era was kicked off in 1975 with the January issue of Popular Electronics. The cover article was on the construction of a computer kit called the MITS Altair 8800. With it came the introduction of the S100 bus. The Altair 8800 was a frame style design where cards were added to increase functionality. While many functions such as main memory have moved to the motherboard, we retain this expansion concept today although the S100 bus has mostly moved into history.

The Altair 8800 was copied by many companies and expanded upon. The S100 bus became an industry standard expansion bus. Lots of companies made cards for the S100 bus. Because of this a lot of computers placed only the basics on the motherboard in an effort to control price. There are problems with this approach. Since there was no game controller (joystick, paddle, buttons) functionality included in the Altair, there was no standardized game interface. I once looked at the cost of adding joysticks to an S100 based computer. The card alone was several hundred dollars. The approach involved expensive analog to digital converters (ADCs). The result was that only keyboard based games evolved for the S100 based machines.

During this time, games like Pong and Breakout were popular. It made sense to bring them to personal computers but they required interactive game controllers, i.e., paddles or joysticks. A keyboard used as a controller lacked the same smooth interactivity. Using the keyboard for games was a compromise aimed at satisfying the engineers and accountants rather than the customers, but it was a compromise most computer manufacturers had adopted. Enter Apple and a few others. In 1977 Apple introduced the Apple II. It came with game paddles along with games like Breakout. To accomplish this in a cost-effective manner, Wozniak pushed most of the design into software. Since he had designed Breakout in hardware for Atari, this was a big change in mindset. Great engineers adopt what is best as opposed to just reworking what they did in the past. Simplifying hardware and pushing complexity into software would turn out to be a very important trend. Here was that trend at a very early stage. Look at the schematic below.

This is part of the schematic of the Apple II included in the Apple II Reference Manual dated January 1978. What looks like a 553 integrated circuit (H13) is actually a 558, a quad version of the venerable 555 timer chip. The 558 is used to provide four paddle, or two joystick, inputs. Each paddle is just a variable resistor. Hooked into the 558, the resistance of the paddle controller determines the oscillation frequency of a simple RC oscillator. A loop in the code keeps reading the oscillator. The microprocessor can only read a 1 or a 0: if the voltage is above a certain level the microprocessor sees a 1; below that it sees a 0. The Apple II loops while looking at the game paddle input. By looking at the pattern, for example 111000111000111000, it can determine the frequency of oscillation. This is then related to a game paddle position and the on-screen paddle is moved to the appropriate position. The beauty of this is that the paddle controller doesn't have to be super linear. The paddles just need to be consistent, i.e., all paddles need to act the same way. Nonlinearities can be corrected in software. To the user, who gets visual feedback by watching the screen while turning the paddle, this is all "just good enough." It is also a high-quality solution since it meets the user's expectations and the requirements for playing games like Breakout. Including games and controllers gave the Apple II great consumer appeal and was a big part of its success, and with it the success of Apple Computer.
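To make the idea concrete, here is a minimal sketch of that software-side trick in modern terms. The function name, the sampling constant, and the helper read_timer_bit() are illustrative assumptions of mine; the real routine was a short 6502 counting loop, not Python:

    # Sketch of reading a resistive paddle without an ADC: sample the RC
    # timer's 1/0 output in a tight loop and count transitions. More
    # transitions in a fixed window means a higher oscillation frequency,
    # which the game maps to a paddle position.
    def read_paddle(read_timer_bit, samples=1000):
        transitions = 0
        last = read_timer_bit()
        for _ in range(samples):          # fixed-length sampling window
            bit = read_timer_bit()
            if last == 1 and bit == 0:    # count falling edges in the pattern
                transitions += 1
            last = bit
        return transitions                # larger count = higher frequency = paddle position

Any nonlinearity in the paddle's resistance can then be corrected with a simple lookup table, which is the whole point: the hardware only has to be consistent, and the software, plus the player's own visual feedback, takes care of the rest.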

Today we often see companies just iterating on a theme. These are the so-so companies. Great companies sit back, look at the bigger picture and think about possibilities. Rather than layering expensive, iterative solutions on each other, the great companies rethink the approach and create solutions that are cost effective while meeting user requirements. Exceptional companies go beyond this and create solutions to user requirements that the user didn’t know he had. That, however, is a topic for another post.

I’m back home and connected. Yeah! My kids are happy since World of Warcraft now works well. I’m trying to catch up and realized I haven’t posted in several days. Next week won’t be any better since I will be heading to Houston for a behind the scenes tour of Mission Control. I hope that trip is as much fun as I expect it will be.

Now to the techie stuff. I was flying today and the conversation turned to how things should work vs. how they really work. Of course the initial topic was about flying. I was working through approach procedures using a new autopilot. I fly a Cirrus SR22 equipped with Avidyne R9 avionics. Recently the autopilot was upgraded from the STEC 55X to the Avidyne DFC-100. This is a big upgrade. The STEC understood rate of turn (from a turn coordinator), altitude (air pressure sensor), course error (from the Horizontal Situation Indicator), and GPS course. The new autopilot receives input from the GPS, the Flight Management System and the Air Data Attitude Heading Reference System. In other words it knows just about everything about the airplane and its condition. It even knows flap position and engine power. The end result is a vastly superior autopilot. Sequencing is automatic (most times – see below). You can put in a flight profile and the plane will fly it, including climbs and descents. The operation is very intuitive and a great example of intelligent user interface design. If you are climbing at a fixed IAS (indicated airspeed) and set up to lock onto a fixed altitude, the IAS button is green to show it is active and the ALT button is blue to show it is enabled but not locked. When you get to the desired altitude the ALT light blinks green and then goes steady green when locked onto the desired altitude. I could go on and on about how great this is and if you have questions just ask.

Now to more specifics about interface design. When you use the DFC-100 autopilot to fly an instrument landing system (ILS) approach, it is very automatic. If you punch VNAV (vertical navigation), you can have the autopilot fly the entire procedure including the appropriate altitudes. When the radio signal of the ILS is received and verified correct (all automatic) the system shifts to using the electronic ILS pathway to the runway. So far everything has been very automatic. If you exit the clouds and see the runway you disconnect the autopilot and land. The problem comes when the clouds are too low to see the runway even when you are close and down low. This is a very dangerous time. At the critical point the plane is 200′ above the ground and there is little margin for error. If you don't see the ground you execute the missed approach.

This is where the great user interface breaks down. If you do nothing the autopilot will fly the plane into the ground. In order to have it fly the missed approach the following must happen. After the final approach fix, but only after, you must press a button labeled Enable Missed Approach. At the decision height, when you are 200′ above the ground, you must either disconnect the autopilot and start the missed approach procedure manually or shift from ILS to FMS as the navigation source and press the VNAV button. I can hear people, including pilots, asking me what the big deal is. The big deal is that this is when you really want the automatic systems looking over your shoulder and helping out. If you forget to shift from ILS to FMS the plane will want to fly into the ground. That's a very bad thing. The system is still great. Even at this moment it is much better than the old system. I am not saying I would want to go back. I am saying it could be better and that this operation doesn't fit with how seamless the autopilot's operation usually is. What the system should do is automatically arm the missed approach. I see no reason for this to be a required manual operation with the potential to be forgotten. The pilot should select the decision height at which the missed approach will begin to be executed. When that point is reached, if the autopilot has not been disconnected, the autopilot should start flying the missed approach, including VNAV functionality. That includes shifting the navigation source from ILS to FMS automatically. The result would be increased safety since the system wouldn't require command input from the pilot at a critical moment.
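Here is a minimal sketch of the behavior I am proposing. The state fields, mode names and thresholds are my own illustrative assumptions, not Avidyne's actual DFC-100 logic:

    from dataclasses import dataclass

    @dataclass
    class ApproachState:
        past_final_approach_fix: bool = False
        missed_approach_armed: bool = False
        autopilot_engaged: bool = True
        height_above_ground_ft: float = 1500.0
        decision_height_ft: float = 200.0     # selected by the pilot
        nav_source: str = "ILS"
        vnav_engaged: bool = False

    def update_autopilot(state: ApproachState) -> ApproachState:
        # Arm the missed approach automatically once past the final approach
        # fix, instead of requiring the pilot to remember a button press.
        if state.past_final_approach_fix:
            state.missed_approach_armed = True

        # At decision height, if the pilot has not disconnected to land
        # visually, fly the miss: switch the navigation source and climb.
        if (state.autopilot_engaged
                and state.missed_approach_armed
                and state.height_above_ground_ft <= state.decision_height_ft):
            state.nav_source = "FMS"      # shift from ILS to FMS automatically
            state.vnav_engaged = True     # VNAV flies the published climb-out
        return state

The pilot can still disconnect and hand-fly at any point; the change is simply that forgetting a button press no longer leaves the autopilot trying to fly the plane into the ground.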

The discussion above relates to what I have been covering in this blog. As computing systems improve and move into every area of our lives, issues like the one above will pop up. Everything about the DFC-100 is vastly superior to the old STEC. The issue is consistency of use. As our computing systems get better and better user interfaces, minor inconsistencies will appear to us as big annoyances. Look at the iPad. If you think of it as an eBook reader that lets you view mail and surf the web, it is an awesome device. If you look at it as a fun device with simple apps and games, it is awesome. As soon as you want it to be your main computer, things like the lack of a user-accessible directory structure become big. Compared to the old Newton or the PDA, the iPad and the iPhone are major advances. However, with this new capability come raised expectations. Developers don't get to do great things and then sit back. As soon as users get comfortable with the new, next great thing they begin to find annoyances. One of Apple's strengths has been minimizing these annoyances, but even on the best devices they are there. Consistency of user experience is a big deal. Getting there is tough. My point is that small details matter. How the icons look, how smooth the scrolling is, the animation when actions are taken are all small things that matter. One of the reasons for the success of the iPad and iPhone has been this consistency and sweating the details when it comes to the user interface. As we merge devices and functions in the post-PC world it will be critical that these disruptions, the non-transparent use scenarios, be identified and fixed.

I thought about making the title of this post “I’m Right – They’re Wrong.” While I like the cloud for data everywhere and for syncing of data, I don’t believe in data ONLY in the cloud. There has been a lot of press around putting everything in the cloud. The Chromebook is one attempt at this. On the surface, my techie side gets excited. I hear cheap, long battery life, one data set and a unified experience across devices. The major thing I hear is low upkeep. Someone else does most of the application updates and makes sure things work. This last part, however, sounded hauntingly familiar. Then it hit me. This was the promise of thin clients. A long time ago in a different computing world, thin clients were going to save companies lots of money. The clients themselves would be cheaper. Falling PC prices killed that as a major selling point. The second thing was ease and consistency of software maintenance. The problem was that the world went mobile. People couldn’t afford to lose software access when they weren’t on the corporate network. In the end thin clients failed. Fast forward to today. The same issues apply to the Chromebook. Why get a Chromebook when a netbook can do so much more? Then there is the issue of connectivity. What happens when there isn’t a WiFi hotspot around? Are you thinking 3/4G? Think again. Look at today’s data plans and their capped data. Most people can’t afford to have everything they type, every song they play, every picture they look at and every video clip they show go over the network. Local storage can solve some of this but then you have independent data and the programs to access that data on the local machine. In other words you are back to managing a PC again.
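Some rough, illustrative arithmetic makes the point. The plan cap and bit rates below are assumptions I've picked for round numbers, not anyone's actual tariff:

    # Illustrative arithmetic: modest everyday streaming vs. a capped data plan.
    cap_gb_per_month = 2.0            # a typical capped 3G/4G plan

    music_kbps = 128                  # streamed audio bit rate
    music_hours_per_day = 2
    video_mbps = 2                    # modest standard-definition video
    video_hours_per_day = 0.5

    bytes_per_month = (
        music_kbps * 1000 / 8 * music_hours_per_day * 3600 * 30
        + video_mbps * 1e6 / 8 * video_hours_per_day * 3600 * 30
    )
    print(f"Estimated usage: {bytes_per_month / 1e9:.1f} GB/month "
          f"vs. a {cap_gb_per_month:.0f} GB cap")    # roughly 17 GB vs. 2 GB

Even with conservative habits, a cloud-only model blows past the cap several times over before you count documents, photos, backups and OS updates.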

Currently I am visiting my sister in Mobile, AL. I realized I needed to freshen up my blog and waiting till I got back home would take too long. No problem, I thought. I have my iPad with me and it will be a chance to learn the basics of Blogsy. That's what I'm doing now, but it has been an enlightening experience and is the genesis of this post. What you need to know is that my sister's house lacks WiFi. Since she and her husband spend a lot of time traveling in their RV, they use a Verizon 4G modem plugged into their laptop. That works for them but it doesn't help me unless I go sit at my brother-in-law's laptop. Of course there's no need for that since my iPad has 3G. Oops! One big problem – the connection is unreliable. Here I am in Mobile, AL, a few miles from their regional airport, and I can't get a reliable data connection. I could launch into an AT&T tirade but that would miss the bigger picture. Mobile, AL is a major city. If I have problems here then what about more remote places? What about other countries? What if I were using a Chromebook? Right now I am writing this post. I will upload it when I have a better connection. I just can't see buying into a usage model that demands 24/7 connectivity. For that reason I have no desire for a Chromebook. The Chromebook will fail.

Transparency of use is still coming but it will happen in a way that takes into account the issues I have just raised. Apple’s iCloud will sync data and leave a copy on each device. Microsoft Mesh does the same. I still believe that a modified version of this together with the Chromebook approach will win in the end. The difference will be that the modified Chromebook (phonebook?, Plattbook?, iBook?) won’t connect to the internet directly but will be a peripheral device for the phone. Your phone will be your wallet and as such always with you. It will also be your primary data device. It will sync with other devices through the cloud and be backed up to the cloud but interactive data access will be to the phone.

At WWDC Apple announced an improved AirPlay in iOS 5. I have broken this out for a separate post because it has gotten little attention from the mainstream press and has huge near- and long-term implications. The key new feature to focus on is AirPlay mirroring. In the near term this is all about corporate penetration. Mirroring works on the iPad 2 and allows you to display the screen on a separate device; for example a TV with an Apple TV attached. This is another step towards using the iPad as a presentation device. All that is needed is a wireless receiver that can be hooked to the projectors now standard in corporate meeting rooms. That would allow cordless mobility using the iPad as a small, easy-to-hold presentation device. There is a lot of near-term potential here. This is about way more than a few extra iPad sales. Apple has always been viewed as a consumer company. The iPad is changing that and the result is big. RIM had the iPhone locked out of the corporate market. Recent security improvements on the iPhone, together with the iPad being adopted in the corporate market, have changed that. The result is that RIM is losing its hold on the corporate world. Driving the iPad deeper into the corporate world will extend this and prevent the Playbook from getting traction. The iPad has the potential to be the de facto corporate presentation device. Apple just needs to listen to me and make the wireless, battery-powered AirPlay display adapter. Throw in transparent collaborative syncing of files and corporate presentations just got a lot easier and slicker.

In the long term AirPlay mirroring takes on even greater importance in an entirely different way. First, you have to move AirPlay mirroring to the phone. Then add in a data link over Bluetooth. What you now have is the ability to merge the phone completely into the automobile. It will take a lot of work to do this in a way that is clean and aids, rather than distracts, the driver. As a simple example, however, imagine playing movies stored on your phone on a display in the car. Another example would be using the GPS and navigation software in your phone to display a map and directions on the display in your car along with voice guidance through the car's audio system. Commands would be given through controls on the steering wheel and voice commands. This is a small but important step towards making the phone the dominant computing platform by a wide margin.

I have discussed the need for wireless charging on several occasions and mentioned a new ultrasonic technique here. One issue with the ultrasonic method is inefficient penetration of materials. For example, a simple mobile phone case has the potential to prevent charging. My version of transparency demands that the user do nothing to connect. It needs to occur, well, ummm… transparently. I mentioned that the problem with inductive charging is range, and that a one-meter range is needed. Well, I was mistaken about the limitations of inductive charging. In fact, mistaken is an understatement. Back in 2007, Karalis, Joannopoulos, and Soljačić published "Efficient wireless non-radiative mid-range energy transfer" in Annals of Physics. Apple was more observant than I am and picked up on this. The result is their patent "Wireless Power Utilization in a Local Computing Environment." It describes wireless charging over about a one-meter range. The near-term impacts of this are small but the long-range impact will almost certainly be huge. This isn't a one-off patent from Apple. They have already been looking at more typical very short range inductive charging solutions. For example patent 7352567, "Methods and apparatuses for docking a portable electronic device that has a planar like configuration and that operates in multiple orientations", describes a wireless charging and data connection base for the iPad. It's interesting for also including a wireless data connection in the base.
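For the curious, the physics behind that kind of mid-range transfer can be boiled down to one figure of merit. This is the standard coupled-mode-theory result for two resonant coils; the notation here is mine, not Apple's or the paper's:

    U = \kappa / \sqrt{\Gamma_1 \Gamma_2}
    \eta_{max} = U^2 / (1 + \sqrt{1 + U^2})^2

Here κ is the coupling rate between the two coils and Γ₁, Γ₂ are their loss rates. Because high-Q resonant coils keep the loss rates small, the maximum efficiency η can stay useful even when the raw coupling κ is weak at a meter of separation, which is exactly what makes charging across a desk or a car cabin plausible.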

In terms of long range potential, imagine having your phone charged while you drive your car or while you sit at your desk at work or when you sit in your recliner at home watching TV. Freeing the phone from its battery limitations opens it up to become your primary computing device where you are able to rely on it always being there. This is huge. Google, Apple and Microsoft are all working on syncing through the cloud. However, that only goes so far. They are syncing data and not applications. Also, it will be a long time before truly high speed wireless data is everywhere.

I mentioned that Mango showed that Microsoft could come on strong once they recognized they were behind. I saw a few unexpected features in Mango and it gave me hope that Microsoft was still in the game if very far behind. However, with the release of more information about Windows 8, I am truly surprised. Microsoft really gets it. They see the need for a unified OS across platforms and for a transparent user experience. Furthermore, Microsoft is using its strength on the desktop to leverage itself into the tablet and phone space. This isn’t my pick for the easiest path in general but it is the easiest and best way for Microsoft. More than other releases, Windows 8 will be about an aggressive business strategy. I love it when business, the consumer, and engineering mesh at such an intimate level.

Windows 8 is important on several levels. First, let's start with the fact that it will run not only on x86 CPUs but also on ARM. Wow! Let that sink in. This means Windows on a CPU that isn't compatible with the Intel x86 architecture. There will be no emulation layer, so current x86 apps won't run on ARM-based hardware. However, this is important in and of itself. Microsoft will be encouraging developers writing lighter apps to write in HTML5 and JavaScript so the apps will be independent of the CPU used. Add this to Apple toying with the idea of an ARM-based MacBook Air and you know why Intel is nervous.

The next surprise is the breadth of Windows 8. It is really a tablet OS where the mouse and keyboard can substitute for touch. You read that correctly. The OS is, in many ways, a tablet OS first and a desktop OS second. This doesn't mean a compromised desktop OS. What it does mean is an OS with touch infused throughout. The same OS will run on tablets, laptops and desktops.

They say a picture is worth a thousand words and the next surprise is best illustrated with a couple of pictures. Here is one of Windows 8 on a PC:

Next I have a picture of the home screen from a phone running Windows Phone.

Do you see what I am excited about? Just like Apple, Microsoft is making the desktop OS look and feel like the phone OS. Do you believe me now when I talk about the push for transparency of the computing experience? Now go back to the comment above about Microsoft pushing for apps written in HTML5 and JavaScript. Those will be easy to port to Windows Phone and vice versa. Microsoft may be late but they are coming on strong.

What does this mean on the business side? Obviously the push onto ARM is a threat to Intel and AMD. In terms of the other hardware and software players, here is how I see it. RIM is in an increasingly bad position. They have zero desktop presence and Microsoft is stronger in the corporate world than RIM. Windows 8 might seem independent of RIM's Blackberry world but, in actuality, it has the potential to do great damage. HP may take a hit too. They are betting a lot on WebOS. I don't see what the value add is for WebOS. Call this one more wait and see, but be skeptical. HP could quickly shift to being Windows 8 centric if need be. Heck, they are Windows centric today. Apple probably fares OK in the near term. Longer term they might lose some of their momentum. However, I see Apple as the best positioned against Windows 8 if they can continue to move towards merging iOS and OSX. I'm still very strong on Apple. Next up for Apple is iOS 5 and iCloud, which will be announced next week. Windows 8 could be problematic for Google. I have trouble believing in Chrome as a desktop OS. Google will still be ahead in the TV space but compared to Microsoft and Apple they lack the desktop. Android is the largest selling smartphone OS and we are about to be inundated with Android tablets, including some excellent ones such as the Samsung 10.1. I still see Microsoft being behind Google but it is a lot more interesting than it was a day ago. Apple just made iWork available on the iPhone in addition to the iPad and OSX devices. Microsoft will have Office running across all devices. Will people buy into Google's idea that web-based solutions are the best answer for their productivity apps? People may, but only if Microsoft screws things up. Then again, Microsoft mucked things up in the past with poorly conceived products like Works.