Archive for the ‘Transparency’ Category

At WWDC we got another chance to see Tim Cook in action. Steve Jobs was always the master presenter, and many had wondered how Apple would fare with Cook at the helm of events like WWDC. This year's event brought us iOS 6, OS X Mountain Lion, the MacBook Pro with Retina Display and some minor updates to other Apple hardware. By the time it was all done I was lusting after the MacBook Pro with Retina Display. I can't wait for Mountain Lion or iOS 6. Why? Heck, I don't know. I'm sure, however, that it will be great.

As time has passed I realize I can live with my old laptop another year, and Mountain Lion and iOS 6 will be nice when they get here, but I'm doing just fine right now. You have to admire how great a show Apple puts on. It is polished and has enough hype to excite, but not so much that you stop believing. All in all a masterful job, and Cook is keeping the tradition alive.

When it comes to Tim Cook at WWDC, a few things stood out. He didn't try to be Steve Jobs. He didn't say "insanely great" every other sentence. He was himself while at the same time being a long-term Apple employee. He exuded the culture. He was calm and confident but dressed down. Without mimicking his predecessor, the feeling that great things were being shown emanated from him. Color me impressed.

Oh how Steve Ballmer needs lessons from the Apple book on giving presentations. Shortly after WWDC, Microsoft called a meeting to introduce the Surface line of tablets. Ballmer looked like a person with a losing hand trying to make people believe it was great. The sad thing is that the Microsoft announcement had more meat than Apple's WWDC event. Some of the other presenters were pretty good. The point was driven home about seeking perfection in even the small things, such as how the stand sounds when you close it. That, however, just served to highlight how important the master of ceremonies is at these things. Every time the event turned back to Ballmer, it was like a chill fell over the presentation. What was needed was a Steve Jobs clone telling me how insanely great this was and making me feel that my life was going to be different because of it. It needed someone who could make me believe. Ballmer made me lose faith. What is sad is that, in hindsight, the Microsoft announcement is major and has long-term implications, including putting pressure on Apple and Google. I'll discuss why in later posts. This post is about form over substance.

One final thought involves the effect this has on the press. After WWDC the press was mostly positive. There was disappointment at no MacBook Airs with Retina Display and some discussion that the rest of the updated MacBook Pro line was a stopgap measure. All of this was done with what Apple would consider appropriate reverence, and the tone was, overall, very Apple fanboy in nature. Compare that to the Microsoft Surface announcement, which led to many skeptical articles with everything being dissected: power, RT incompatibility, product line confusion, display resolution, etc. Where are the raves? It seems to come down to nothing more than the fact that Microsoft isn't cool and Apple is.


This is certainly a belated post. I have been meaning to write it for many months but kept getting distracted. CES came and went with little that was earth shattering but a lot that was incremental. TVs are more connected than ever while also getting bigger and thinner. Computers are slimmer and faster. The MacBook Air line is finally getting some serious competition, but the pricing appears to be less than stellar. Here is a case where the Apple tax may be less than people suspect. SSDs are slowly replacing hard drives, and SSD speeds continue to increase. If you haven't replaced your main hard drive with an SSD then you are in for a treat, along with the concomitant blow to your wallet. Tablets are rushing forward. Vastly lower pricing should open tablets up to many more people and cause Android market share to surge. NFC is moving forward and uses are expanding. By 2013 I expect most top-end smartphones will support NFC, and that includes Apple.

There was, however, one area that brought a small amount of excitement: automotive. I have blogged before about Ford and their moves forward. There is a summary of the automotive announcements at Engadget, so I won't repeat a lot of it here. In general, phones, especially the iPhone, are being better integrated into automobiles, and the move towards running apps on the automobile's systems gets closer to reality. Right now most apps are proprietary, but their numbers are increasing. Automobiles are getting more tightly connected to the web, with the ability to send data between car and home. Back in 2009 GM and Ford announced that they intended to build Android cars. Here it is 2012 and we are still waiting, but things are moving forward. The Chinese are there with the Roewe 350. Ford, GM, Mercedes et al. are moving closer. In the end transparency of use and data will prevail, and the automobile will merge seamlessly with the phone, TV and tablet.

First and foremost Apple sells a polished user experience. Apple sweats the details. From the moment you walk into the store the experience is polished and first rate. Unboxing your purchase continues the experience. Even Apple’s service group, AppleCare, is different. You get lots of attention from people who know what they are doing. Apple hardware has a lot of refinement. The OS feel is consistent and people consistently talk about Apple products as intuitive and easy to use.

I have written about convergence and transparency. These two trends play right into Apple's strengths. Apple is selling more and more laptops because people have purchased iPhones. People who have purchased iPads are now buying iPhones. The release of OS X Lion moves the laptop closer to iOS. The iPhone and the iPad use the same OS. This means transparency of use. But, for the first time, I see Apple moving backwards. Their new policy requires that Apple receive 30% of any in-app purchase. I can see how Apple reached this point. Games would be offered for free in the Apple App Store. Once you started playing the game, you found out you had to make an in-app purchase to go beyond level 3. Apple saw this as a direct end run around their App Store policies in order to avoid paying Apple their cut. Admittedly, at 30% that cut is big, and hence companies, especially small ones, are highly motivated to avoid this form of app store "tax." None of this is a big problem as long as we are talking about games. Things are different when it comes to magazines and books.

So far the best example of the move towards transparency has been the Kindle ecosystem. There are Kindle apps for just about every device. There are apps for Android, iPhone, iPad, Mac, and Windows. If you buy a book through any one app it is available on all of the others. Bookmarks are shared. You can read on your tablet, pick up on your phone and finish up on your laptop. In every case, when you move to a new device, the app knows where you left off on the old one. This is transparency of use in action. Now Apple is working to hinder that transparency.

Reading books is still a transparent experience. However, buying them now involves exiting the Kindle program and using a web browser to go to Amazon.com. On iOS you can't even click a button in the Kindle app and have it open Safari at the appropriate URL, though you can in the Mac Kindle app. What should really happen is that the Kindle store should be built into the Kindle app. I suspect it eventually will be on Android. It will never be on iOS devices. Apple's 30% cut would change a money maker into a loss leader. Not only is 30% too high, I see no reason Apple should get anything. The books aren't being bought through Apple's online store. Besides, it is anticompetitive: it gives Apple's own iBooks a competitive pricing advantage. The problem is, iBooks isn't as universal as Kindle. This small chink in Apple's image is becoming a growing crack. Online forums have end users griping about it. This is a chance for Google to press Apple and change the image of Android vs. iOS.
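To see how a 30% cut can turn a money maker into a loss leader, consider some quick arithmetic. The 30% retailer share below is an assumption for the sake of illustration, not Amazon's actual terms, but it is in the neighborhood of a typical agency-priced ebook split:

```python
# Illustrative numbers only (assumed, not Amazon's actual terms): under a
# typical agency-style split the retailer keeps roughly 30% of the ebook price.
price = 9.99
retailer_share = 0.30 * price   # what the bookseller grosses per sale
apple_cut = 0.30 * price        # Apple's in-app purchase fee on the same sale
margin_after_cut = retailer_share - apple_cut

print(f"Retailer gross:    ${retailer_share:.2f}")
print(f"After Apple's 30%: ${margin_after_cut:.2f}")  # the entire margin is gone
```

With numbers like these, every in-app sale would hand Apple the bookseller's whole gross margin before any operating costs, which is why building the store into the iOS app is a non-starter.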

Until now, Android has been an interesting phone OS beloved by techies for its openness and many features. Most consumers have viewed, and in fact still view, Apple's iOS as the more polished and bug-free operating system for phones and tablets. Apple's greed could change that. Android gets more polished day by day. If in-app purchases become the norm for Android and the exception for iOS, then consumers will see Android as the easier and more transparent operating system. Imagine the difference if Amazon gives its Kindle apps smooth integration with the Kindle store everywhere except on Apple devices. As more people buy and read ebooks, this will push them towards Android instead of iOS. All you have to do is read this to see how Apple may be inadvertently causing apps to be less friendly. Android versions of the apps won't be so limited.

Right now Apple's new policy has done little other than make Apple richer and tick off some app writers. However, as Android keeps getting stronger, this policy might come to threaten Apple when consumers begin to find buying and reading ebooks and ezines easier and more transparent on Android than on iOS.

If you have followed my blog from its inception you know I feel the phone will become your primary computer. That feeling continues to grow stronger. The more difficult issue is discerning just what path this will take. I have mentioned before that companies can fail by jumping to the final solution and not realizing that change often progresses along a jagged path. My ultimate dream is a device that connects to the proper interface in a transparent fashion.

Right now we have WiFi and Bluetooth. Apple lets AirPlay ride on WiFi. This gives some support for video transfer from an iPad to a TV, but requires an Apple TV device to make it happen. However, none of this handles the high bandwidth needed to make the user interface, and the high-definition video that goes with it, work without compromise. Enter standards groups to the rescue; unfortunately, too many groups.

A first stab at this came with Wireless USB. This is an ultra-wideband technology that allows speeds up to 480 Mbps, but only at a range of 3 meters. This is inadequate for 1080p 60 Hz video, much less 3D and higher resolutions. This technology has gotten very little traction.
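To put numbers on that claim, here is a back-of-the-envelope calculation. It assumes uncompressed 24-bit color, which is roughly what a display-cable-replacement link has to carry:

```python
# Bandwidth needed for uncompressed 1080p video at 60 Hz, 24 bits per pixel.
width, height, bits_per_pixel, fps = 1920, 1080, 24, 60
bandwidth_bps = width * height * bits_per_pixel * fps

print(f"Uncompressed 1080p60 needs {bandwidth_bps / 1e9:.2f} Gbps")
# Wireless USB tops out around 0.48 Gbps, so it falls short by a factor of ~6.
print(f"Shortfall vs. 480 Mbps: {bandwidth_bps / 480e6:.1f}x")
```

Roughly 3 Gbps before any audio or overhead, which is why the 60 GHz multi-gigabit technologies discussed below exist at all.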

The early leader was the WHDI (Wireless Home Digital Interface) consortium. However, the WirelessHD Consortium has an impressive list of supporters. Next comes the Wireless Gigabit Alliance, or WiGig. They also have some big players behind them, including some of the same companies as WirelessHD. It's all very confusing.

Recall what I said about major vs. minor trends. This has signs of being a major trend. But wait, it doesn’t “feel” that way. People aren’t scrambling to get wireless video hardware. That’s going to change. There is a lot in the works and it will take time to gel but it will gel.

Who am I betting on? Well, I'll start with an interesting fact. Of particular interest here is the adoption of support for wireless DisplayPort by WiGig. Not mentioned on the WiGig website is an important name – Apple. Recall that Apple is the big force behind DisplayPort. A second force pushing WiGig is the movement by companies like Panasonic to take WiGig mobile. WHDI is mobile capable but has more challenges extending its speed and flexibility. Another related major announcement is the Qualcomm Atheros AR9004TB chip for WiGig. However, this looks suited for laptops and docking stations, not phones. It will compete with WirelessHD solutions such as the SiI6320/SiI6310 WirelessHD HRTX chipset.

How does this play out? The Qualcomm chip shows the way to docking stations for tablets and phones. These may have some success, but the need is for a more embedded solution. That will start with laptops, which have the luxury of more board space and larger batteries. However, it will move into phones once the power issue is solved. This won't be the end. So far I have been discussing wireless video. True transparency will require something more general. For that I expect something like wPCIe from Wilocity to allow full connectivity. Initially wPCIe will allow laptops to wirelessly dock with peripherals. Longer term, this too will migrate into the tablet and the phone. At that point your phone will wirelessly dock with external hard drives, displays, and pretty much anything else you would hook to a desktop. wPCIe is based on the WiGig standard, so it will be a quick extension to WiGig wireless video. That also means that range will be adequate to allow your phone or laptop to be several meters away from the other end of the wireless link.

Currently, none of this matches the speed of Thunderbolt, but it may be close enough. WirelessHD has higher speeds already defined, and I expect WiGig to follow. Expect WiGig to look a lot like wireless Thunderbolt. Thunderbolt is basically DisplayPort plus PCI Express (PCIe). WiGig will also include DisplayPort and PCIe. For true speed freaks, a hard-wired connection will always be best. Thunderbolt will move to 100 Gbps when the move is made from copper to fiber. By then WiGig and WirelessHD will just be matching copper-connected Thunderbolt in performance.

There's a lot more at play here that makes it difficult to predict the winner. WHDI works at lower frequencies and can connect through walls. WirelessHD and WiGig are strictly line of sight. However, some of the claims for future versions of WHDI are suspect, since they involve very high data rates relative to the available frequency bandwidth. WiGig has the ability to move from a WiFi connection to a WiGig connection in a transparent fashion. WHDI is mobile capable now since it rides on older, WiFi-class radio technology. I am uncertain when a low-power WiGig or WirelessHD chip will be available.

While I was out flying today the discussion moved to how things should work. As I posted earlier, the initial discussion was about autopilot operation. It later turned to meetings and how they will change in the future. I have talked about the phone as the primary computing device. I want to outline how this will merge with the future meeting room.

You are at your desk working on material for the meeting when the clock becomes your enemy and it is time to go. You are able to work right up to the last minute because your work will move with you. You stand up and the screen on your desk goes dark. You walk to the boardroom and sit down. In front of you a screen, keyboard and mouse become active. You are back where you were.

There is one addition. The conference room is on its own small subnet. When you sat down, several things happened. Inductive circuitry in the chair began charging your phone. A short range link connected the screen, keyboard, and mouse. As you sit down you are connected, and your contact information and picture are collected. If you are a company member you are connected back to the main network. If you are a visitor it is a guest network, which allows internet access but keeps the internal network isolated.

On your screen you see a graphic of the meeting room table. At each location is a picture along with the name of the person sitting at that position. A click on the image reveals the information on a standard business card. During the meeting Bill asks if you have received the latest proposal from legal. He needs to see it when you are done. You say you have received it and have finished marking it up. You drag it to Bill's image on the conference room graphic and a copy is sent to Bill.

Now it's your turn to present. Fortunately you are ready. A simple click and your presentation is on the large display. A click on your tablet brings up the presentation complete with speaker notes. As you stand up, the screen on the meeting room table goes blank and the phone is no longer being charged, but you are still on the meeting room network and your presentation is still displayed on the large screen. You move seamlessly between devices and use the one best suited for the moment.

A few days later there is a meeting of a different kind. It's a late night conference call. Hey, that comes with being part of a central support organization for a company with operations in China. You sit down at a desk and start the video call. Like the boardroom example, you are able to transfer files by dragging and dropping them onto the picture of someone on the call. This scenario is pretty much here today; what remains is to make the user interface more transparent in use. With cameras now standard on both PCs and tablets, expect video conferencing to increase a lot over the next two years. One new addition will be the capability to seamlessly transfer the call from device to device. A call might be started on your phone. As you walk into your office it would transfer to the large screen on your desk. A personal call might start out on your TV but be transferred to your phone as you head out. In your car you wouldn't have video, but it would be back on your phone when you got to your destination.

I’m back home and connected. Yeah! My kids are happy since World of Warcraft now works well. I’m trying to catch up and realized I haven’t posted in several days. Next week won’t be any better since I will be heading to Houston for a behind the scenes tour of Mission Control. I hope that trip is as much fun as I expect it will be.

Now to the techie stuff. I was flying today and the conversation turned to how things should work vs. how they really work. Of course the initial topic was flying. I was working through approach procedures using a new autopilot. I fly a Cirrus SR22 equipped with Avidyne R9 avionics. Recently the autopilot was upgraded from the STEC 55X to the Avidyne DFC-100. This is a big upgrade. The STEC understood rate of turn (from a turn coordinator), altitude (air pressure sensor), course error (from the Horizontal Situation Indicator), and GPS course. The new autopilot receives input from the GPS, the Flight Management System and the Air Data Attitude Heading Reference System. In other words, it knows just about everything about the airplane and its condition. It even knows flap position and engine power. The end result is a vastly superior autopilot. Sequencing is automatic (most times – see below). You can put in a flight profile and the plane will fly it, including climbs and descents. The operation is very intuitive and a great example of intelligent user interface design. If you are climbing at a fixed IAS (Indicated Airspeed) and are set up to lock onto a fixed altitude, the IAS button is green to show it is active and the ALT button is blue to show it is enabled but not locked. When you get to the desired altitude the ALT light blinks green, then goes steady green when locked onto the desired altitude. I could go on and on about how great this is; if you have questions, just ask.

Now to more specifics about interface design. When you use the DFC-100 autopilot to fly an instrument landing system (ILS) approach, it is very automatic. If you punch VNAV, vertical navigation, you can have the autopilot fly the entire procedure, including the appropriate altitudes. When the radio signal of the ILS is received and verified correct (all automatic), the system shifts to using the electronic ILS pathway to the runway. So far everything has been very automatic. If you exit the clouds and see the runway, you disconnect the autopilot and land. The problem comes when the clouds are too low to see the runway even when you are close and down low. This is a very dangerous time. At the critical point the plane is 200′ above the ground and there is little margin for error. If you don't see the ground you execute the missed approach. This is where the great user interface breaks down. If you do nothing, the autopilot will fly the plane into the ground. In order to have it fly the missed approach, the following must happen. After the final approach fix, but only after, you must press a button labeled Enable Missed Approach. At the decision height, when you are 200′ above the ground, you must either disconnect the autopilot and start the missed approach procedure manually, or shift from ILS to FMS as the navigation source and press the VNAV button. I can hear people, including pilots, asking me what the big deal is. The big deal is that this is when you really want the automatic systems looking over your shoulder and helping out. If you forget to shift from ILS to FMS, the plane will want to fly into the ground. That's a very bad thing. The system is still great. Even at this moment it is much better than the old system. I am not saying I would want to go back. I am saying it could be better, and that this operation doesn't fit with how seamless the autopilot's operation usually is. What the system should do is automatically arm the missed approach.
I see no reason for this to be a required manual operation with the potential to be forgotten. The pilot should select the decision height at which the missed approach will begin to be executed. When that point is reached, if the autopilot has not been disconnected, the autopilot should start flying the missed approach, including VNAV functionality. That includes shifting the navigation source from ILS to FMS automatically. The result would be increased safety, since the system wouldn't be requiring command input from the pilot at a critical moment.
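To make the proposed behavior concrete, here is a toy sketch of the auto-arming logic. Every class, field and mode name below is invented for illustration; real avionics software is nothing like this simple:

```python
# Toy sketch of the fix described above: if the autopilot is still engaged at
# the pilot-selected decision height, it arms and flies the missed approach
# itself (switching the nav source from ILS to FMS) instead of waiting for an
# "Enable Missed Approach" button press. All names here are hypothetical.

class Autopilot:
    def __init__(self, decision_height_ft):
        self.decision_height_ft = decision_height_ft  # selected by the pilot
        self.engaged = True
        self.nav_source = "ILS"
        self.mode = "APPROACH"

    def update(self, height_above_ground_ft):
        if not self.engaged:
            return  # pilot has taken over and will land or go around manually
        if self.mode == "APPROACH" and height_above_ground_ft <= self.decision_height_ft:
            # Nothing for the pilot to remember at the critical moment:
            self.nav_source = "FMS"           # shift nav source automatically
            self.mode = "MISSED_APPROACH"     # fly the published miss with VNAV

ap = Autopilot(decision_height_ft=200)
ap.update(500)  # still on the approach, tracking the ILS
ap.update(200)  # at decision height with the autopilot engaged: go around
print(ap.mode, ap.nav_source)  # MISSED_APPROACH FMS
```

The point of the sketch is simply that the state transition is triggered by altitude, not by a button the pilot can forget to press.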

The discussion above relates to what I have been covering in this blog. As computing systems improve and move into every area of our lives, issues like the one above will pop up. Everything about the DFC-100 is vastly superior to the old STEC. The issue is consistency of use. As our computing systems get better and better user interfaces, minor inconsistencies will appear to us as big annoyances.

Look at the iPad. If you think of it as an eBook reader that lets you view mail and surf the web, it is an awesome device. If you look at it as a fun device with simple apps and games, it is awesome. As soon as you want it to be your main computer, things like the lack of a user-accessible directory structure become big. Compared to the old Newton or the PDA, the iPad and the iPhone are major advances. However, with this new capability come raised expectations. Developers don't get to do great things and then sit back. As soon as users get comfortable with the next great thing, they begin to find annoyances. One of Apple's strengths has been minimizing these annoyances, but even on the best devices they are there. Consistency of user experience is a big deal. Getting there is tough. My point is that small details matter. How the icons look, how smooth the scrolling is, the animation when actions are taken are all small things that matter. One of the reasons for the success of the iPad and iPhone has been this consistency and sweating the details when it comes to the user interface. As we merge devices and functions in the post-PC world, it will be critical that these disruptions, the non-transparent use scenarios, be identified and fixed.

I thought about making the title of this post “I’m Right – They’re Wrong.” While I like the cloud for data everywhere and for syncing of data, I don’t believe in data ONLY in the cloud. There has been a lot of press around putting everything in the cloud. The Chromebook is one attempt at this. On the surface, my techie side gets excited. I hear cheap, long battery life, one data set and a unified experience across devices. The major thing I hear is low upkeep. Someone else does most of the application updates and makes sure things work. This last part, however, sounded hauntingly familiar. Then it hit me. This was the promise of thin clients. A long time ago in a different computing world, thin clients were going to save companies lots of money. The clients themselves would be cheaper. Falling PC prices killed that as a major selling point. The second thing was ease and consistency of software maintenance. The problem was that the world went mobile. People couldn’t afford to lose software access when they weren’t on the corporate network. In the end thin clients failed. Fast forward to today. The same issues apply to the Chromebook. Why get a Chromebook when a netbook can do so much more? Then there is the issue of connectivity. What happens when there isn’t a WiFi hotspot around? Are you thinking 3/4G? Think again. Look at today’s data plans and their capped data. Most people can’t afford to have everything they type, every song they play, every picture they look at and every video clip they show go over the network. Local storage can solve some of this but then you have independent data and the programs to access that data on the local machine. In other words you are back to managing a PC again.

Currently I am visiting my sister in Mobile, AL. I realized I needed to freshen up my blog and waiting till I got back home would be too long. No problem I thought. I have my iPad with me and it will be a chance to learn the basics of Blogsy. That’s what I’m doing now but it has been an enlightening experience and is the genesis of this post. What you need to know is that my sister’s house lacks WiFi. Since she and her husband spend a lot of time traveling in their RV, they use a Verizon 4G modem plugged into their laptop. That works for them but it doesn’t help me unless I go sit on my brother-in-law’s laptop. Of course there’s no need for that since my iPad has 3G. Oops! One big problem – the connection is unreliable. Here I am in Mobile, AL, a few miles from their regional airport and I can’t get a reliable data connection. I could launch into an AT&T tirade but that would miss the bigger picture. Mobile, AL is a major city. If I have problems here then what about more remote places? What about other countries? What if I were using a Chromebook? Right now I am writing this post. I will upload it when I have a better connection. I just can’t see buying into a usage model that demands 24/7 connectivity. For that reason I have no desire for a Chromebook. The Chromebook will fail.

Transparency of use is still coming, but it will happen in a way that takes into account the issues I have just raised. Apple's iCloud will sync data and leave a copy on each device. Microsoft's Live Mesh does the same. I still believe that a modified version of this together with the Chromebook approach will win in the end. The difference will be that the modified Chromebook (phonebook?, Plattbook?, iBook?) won't connect to the internet directly but will be a peripheral device for the phone. Your phone will be your wallet and as such always with you. It will also be your primary data device. It will sync with other devices through the cloud and be backed up to the cloud, but interactive data access will be to the phone.

I didn't publish anything Monday or Tuesday. I was busy digesting what was coming out of WWDC and E3. I won't regurgitate the standard stuff covered better by sites such as Engadget. Rather, I want to comment on what people missed or only put down as a footnote. However, I do need to go through a few big items. First, WWDC is notable for no new hardware and few surprises. Little made me go WOW! The cloud is taking on more importance. However, a lot of this has already been done by Microsoft and Google. What Apple brings, get ready for it, is better transparency of use. If you are willing to buy into the Apple ecosystem then you get data transparency in return. The same goes for Microsoft and Google, but the Apple approach is more automatic and, here is that word I overuse, transparent. This is an ecosystem war. Who gets left out? Well, I'm not buying any stock in RIM.

Everyone is talking about what the cloud will do. Here is what it won't do. Right now no one has it syncing apps or present device status. That means when you move from one device to another you don't just pick up where you left off. Your data will be there, but you will have to open an appropriate application and load the data. If you don't have an appropriate application installed, for example Excel, then well… it won't be installed. So, all of the new stuff coming out is a step in the right direction, but just a step.
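To illustrate the gap, here is a sketch of the kind of session state the cloud would have to sync, beyond the data itself, for "pick up where you left off" to work. All field, file and function names below are hypothetical:

```python
# Hypothetical session state a cloud service would need to sync to resume work
# on another device: the foreground app, the open document, and the position
# within it. No current service (as of this writing) syncs this.
session_state = {
    "device": "laptop",
    "foreground_app": "Excel",
    "open_document": "q3_forecast.xlsx",
    "cursor": {"sheet": "Summary", "cell": "B17"},
}

def resume(state, installed_apps):
    """On the new device, reopen the same app at the same spot, or explain why not."""
    app = state["foreground_app"]
    if app not in installed_apps:
        # The data synced, but without the app the state is useless.
        return f"Cannot resume: {app} is not installed on this device"
    return f"Reopening {state['open_document']} in {app} at {state['cursor']['cell']}"

print(resume(session_state, installed_apps={"Excel", "Safari"}))
print(resume(session_state, installed_apps={"Safari"}))
```

The second call is the point: syncing documents without syncing (or provisioning) applications and their state only gets you halfway.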

I liked Apple's Match announcement, but it just highlights the bandwidth limitations that make a pure cloud existence less than thrilling. The bandwidth issue is one of many reasons I am less than thrilled with Google's Chromebook concept. Also, you will have to be careful with iCloud to make sure you really have your photos backed up, since they only remain in the cloud for 30 days. In the end the cloud is great for syncing and sharing, but I don't want it to be my only data storage location.

As far as E3 is concerned, there was the usual plethora of game announcements. On the hardware front Nintendo showed an early version of the Wii U, complete with graphics generated on the Xbox 360 and PS3. Yes, you read that correctly. Some of the example graphics were actually generated on competitors' platforms. What's notable about the Wii U isn't the fact that it has a faster processor or 1080p graphics. The big deal is the new touch screen controller. It lets you play without using the TV set or, if you do use the TV, have a second display. Wait, isn't this just like what I was suggesting for Apple? Oh yeah, the Nintendo controller includes accelerometers and gyros just like an iPhone. Nintendo is on the right track. However, Apple, Google and Microsoft are all coming from stronger positions if they will just see it and actually attack this space.

I have discussed the need for wireless charging on several occasions and mentioned a new ultrasonic technique here. One issue with the ultrasonic method is inefficient penetration of materials. For example, a simple mobile phone case has the potential to prevent charging. My version of transparency demands that the user have to do nothing to connect. It needs to occur, well, ummm… transparently. I mentioned the problem with inductive charging being range and the need for a one meter range. Well, I was mistaken about the limitations of inductive charging. In fact, mistaken is an understatement. Back in 2007, Karalis, Joannopoulos, and Soljačić published "Efficient wireless non-radiative mid-range energy transfer" in Annals of Physics. Apple was more observant than I am and picked up on this. The result is their patent "Wireless Power Utilization in a Local Computing Environment." It describes wireless charging over about a one meter range. The near term impacts of this are small, but the long range impact will almost certainly be huge. This isn't a one-off patent from Apple. They have already been looking at more typical very short range inductive charging solutions. For example, patent 7352567, "Methods and apparatuses for docking a portable electronic device that has a planar like configuration and that operates in multiple orientations", describes a wireless charging and data connection base for the iPad. It's interesting for also including a wireless data connection in the base.

In terms of long range potential, imagine having your phone charged while you drive your car or while you sit at your desk at work or when you sit in your recliner at home watching TV. Freeing the phone from its battery limitations opens it up to become your primary computing device where you are able to rely on it always being there. This is huge. Google, Apple and Microsoft are all working on syncing through the cloud. However, that only goes so far. They are syncing data and not applications. Also, it will be a long time before truly high speed wireless data is everywhere.

Pioneer announced the AppRadio a few weeks back. You can read about it here. I got excited at first. I thought they had really integrated the iPhone into the radio. Instead it runs its own apps. I want it to display my iPhone on a screen in my car so that I run the apps on my iPhone. I don't need yet another device to get apps for. I want to run the ones I already have on my iPhone. I was kind to Mulally in my last post. That's because I feel he is pushing Ford in the right direction. Still, will someone please "get it" that the phone is primary and the car should include an interface to the phone rather than duplicating the smartphone's functionality?