Posts Tagged ‘iPad’

I’m back home and connected. Yay! My kids are happy since World of Warcraft now works well. I’m trying to catch up and realized I haven’t posted in several days. Next week won’t be any better since I will be heading to Houston for a behind-the-scenes tour of Mission Control. I hope that trip is as much fun as I expect it will be.

Now to the techie stuff. I was flying today and the conversation turned to how things should work vs. how they really work. Of course the initial topic was flying. I was working through approach procedures using a new autopilot. I fly a Cirrus SR22 equipped with Avidyne R9 avionics. Recently the autopilot was upgraded from the STEC 55X to the Avidyne DFC-100. This is a big upgrade. The STEC understood rate of turn (from a turn coordinator), altitude (from an air pressure sensor), course error (from the Horizontal Situation Indicator), and GPS course. The new autopilot receives input from the GPS, the Flight Management System, and the Air Data Attitude Heading Reference System. In other words, it knows just about everything about the airplane and its condition. It even knows flap position and engine power. The end result is a vastly superior autopilot. Sequencing is automatic (most times – see below). You can put in a flight profile and the plane will fly it, including climbs and descents.

The operation is very intuitive and a great example of intelligent user interface design. If you are climbing at a fixed IAS (indicated airspeed) and set up to lock onto a fixed altitude, the IAS button is green to show it is active and the ALT button is blue to show it is enabled but not locked. When you get to the desired altitude the ALT light blinks green, then goes steady green once locked onto the desired altitude. I could go on and on about how great this is, and if you have questions just ask.
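That altitude-capture annunciation is easy to model as a tiny state machine. The sketch below is purely illustrative Python, not Avidyne code: the capture window and hold tolerance numbers are my own assumptions, and I’m mapping the blue / blinking-green / steady-green lights to armed, capturing, and captured states.

```python
from enum import Enum

class AltMode(Enum):
    """ALT button annunciation: blue = enabled (armed), blinking green =
    capturing the altitude, steady green = locked on. Hypothetical model."""
    ARMED = "blue"
    CAPTURING = "blinking green"
    CAPTURED = "steady green"

def alt_annunciator(current_alt_ft: float, target_alt_ft: float,
                    capture_window_ft: float = 200.0,
                    hold_tolerance_ft: float = 20.0) -> AltMode:
    """Return the ALT annunciation for a climb toward a preselected altitude.

    The 200 ft capture window and 20 ft hold tolerance are illustrative
    numbers, not values from any DFC-100 documentation."""
    error = abs(target_alt_ft - current_alt_ft)
    if error <= hold_tolerance_ft:
        return AltMode.CAPTURED       # steady green: locked onto altitude
    if error <= capture_window_ft:
        return AltMode.CAPTURING      # blinking green: capture in progress
    return AltMode.ARMED              # blue: enabled but not yet capturing

# Climbing through 7,000 ft toward a 9,000 ft preselect:
print(alt_annunciator(7_000, 9_000).value)   # blue
print(alt_annunciator(8_900, 9_000).value)   # blinking green
print(alt_annunciator(8_995, 9_000).value)   # steady green
```

The nice thing about the real interface is exactly this: the light tells you which of the three states you are in without any extra thought.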

Now to more specifics about interface design. When you use the DFC-100 autopilot to fly an instrument landing system (ILS) approach, it is very automatic. If you punch VNAV (vertical navigation), you can have the autopilot fly the entire procedure, including the appropriate altitudes. When the radio signal of the ILS is received and verified correct (all automatic), the system shifts to using the electronic ILS pathway to the runway. So far everything has been very automatic. If you exit the clouds and see the runway, you disconnect the autopilot and land. The problem comes when the clouds are too low to see the runway even when you are close and down low. This is a very dangerous time. At the critical point the plane is 200′ above the ground and there is little margin for error. If you don’t see the runway you execute the missed approach. This is where the great user interface breaks down. If you do nothing, the autopilot will fly the plane into the ground. To have it fly the missed approach, the following must happen. After the final approach fix, but only after, you must press a button labeled Enable Missed Approach. At the decision height, when you are 200′ above the ground, you must either disconnect the autopilot and fly the missed approach procedure manually, or shift the navigation source from ILS to FMS and press the VNAV button. I can hear people, including pilots, asking me what the big deal is. The big deal is that this is exactly when you want the automatic systems looking over your shoulder and helping out. If you forget to shift from ILS to FMS, the plane will still want to fly into the ground. That’s a very bad thing. The system is still great. Even at this moment it is much better than the old system, and I am not saying I would want to go back. I am saying it could be better, and that this operation doesn’t fit with how seamless the autopilot’s operation usually is. What the system should do is automatically arm the missed approach.
I see no reason for this to be a required manual operation with the potential to be forgotten. The pilot should select the decision height at which the missed approach will begin to be executed. When that point is reached, if the autopilot has not been disconnected, the autopilot should start flying the missed approach, including VNAV functionality. That includes shifting the navigation source from ILS to FMS automatically. The result would be increased safety, since the system wouldn’t require command input from the pilot at a critical moment.
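If it helps to see the proposal concretely, here is a minimal sketch of the decision logic I’m describing, in Python. All the names and return strings are mine, not Avidyne’s; it just captures the idea that arming the miss is automatic after the final approach fix, and that the ILS-to-FMS shift happens without a button press at the decision height.

```python
def autopilot_action(past_final_approach_fix: bool,
                     agl_ft: float,
                     decision_height_ft: float,
                     pilot_disconnected: bool) -> str:
    """Proposed (not actual DFC-100) missed-approach behavior.

    - Passing the final approach fix arms the miss automatically.
    - At the decision height, if the pilot has not disconnected to land,
      the nav source shifts from ILS to FMS and VNAV flies the miss.
    """
    if pilot_disconnected:
        # Pilot saw the runway and is landing, or is hand-flying the miss.
        return "manual"
    if not past_final_approach_fix:
        return "track ILS (miss not yet armed)"
    if agl_ft > decision_height_ft:
        return "track ILS (miss armed automatically)"
    # Decision height reached with the autopilot still engaged: no Enable
    # Missed Approach button to remember, no manual ILS-to-FMS shift.
    return "fly missed approach (nav source = FMS, VNAV engaged)"

print(autopilot_action(True, 800, 200, False))  # still tracking the ILS
print(autopilot_action(True, 200, 200, False))  # autopilot flies the miss
```

The point of the sketch is the last branch: the one case the current design leaves to a button press at 200′ is the one case the logic can handle on its own.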

The discussion above relates to what I have been covering in this blog. As computing systems improve and move into every area of our lives, issues like the one above will pop up. Everything about the DFC-100 is vastly superior to the old STEC. The issue is consistency of use. As our computing systems get better and better user interfaces, minor inconsistencies will appear to us as big annoyances. Look at the iPad. If you think of it as an eBook reader that lets you view mail and surf the web, it is an awesome device. If you look at it as a fun device with simple apps and games, it is awesome. As soon as you want it to be your main computer, things like the lack of a user-accessible directory structure become big. Compared to the old Newton or the PDA, the iPad and the iPhone are major advances. However, with this new capability come raised expectations. Developers don’t get to do great things and then sit back. As soon as users get comfortable with the new, next great thing, they begin to find annoyances. One of Apple’s strengths has been minimizing these annoyances, but even on the best devices they are there. Consistency of user experience is a big deal. Getting there is tough. My point is that small details matter. How the icons look, how smooth the scrolling is, and how actions are animated are all small things that matter. One of the reasons for the success of the iPad and iPhone has been this consistency and sweating the details when it comes to the user interface. As we merge devices and functions in the post-PC world, it will be critical that these disruptions, the non-transparent use scenarios, be identified and fixed.

At WWDC Apple announced an improved AirPlay in iOS 5. I have broken this out into a separate post because it has gotten little attention from the mainstream press and has huge near- and long-term implications. The key new feature to focus on is AirPlay mirroring. In the near term this is all about corporate penetration. Mirroring works on the iPad 2 and allows you to display the screen on a separate device; for example, a TV with an Apple TV attached. This is another step toward using the iPad as a presentation device. All that is needed is a wireless receiver that can be hooked to the projectors now standard in corporate meeting rooms. That would allow cordless mobility using the iPad as a small, easy-to-hold presentation device. There is a lot of near-term potential here. This is about way more than a few extra iPad sales. Apple has always been viewed as a consumer company. The iPad is changing that and the result is big. RIM had the iPhone locked out of the corporate market. Recent security improvements on the iPhone, together with the iPad being adopted in the corporate market, have changed that. The result is that RIM is losing its hold on the corporate world. Driving the iPad deeper into the corporate world will extend this and prevent the Playbook from getting traction. The iPad has the potential to be the de facto corporate presentation device. Apple just needs to listen to me and make the wireless, battery-powered AirPlay display adapter. Throw in transparent collaborative syncing of files and corporate presentations just got a lot easier and slicker.

In the long term, AirPlay mirroring takes on even greater importance in an entirely different way. First, you have to move AirPlay mirroring to the phone. Then add in a data link over Bluetooth. What you now have is the ability to merge the phone completely into the automobile. Doing this in a way that is clean and aids rather than distracts the driver will take a lot of work. As a simple example, however, imagine playing movies stored on your phone on a display in the car. Another example would be using the GPS and navigation software in your phone to display a map and directions on the display in your car, along with voice guidance through the car’s audio system. Commands would be given through controls on the steering wheel and voice commands. This is a small but important step towards making the phone the dominant computing platform by a wide margin.

In looking back at my comments on transparency I realize I might be giving the wrong impression. Data transparency is moving along. However, creating software that makes moving from one device to another transparent is very hard to accomplish. It involves making the transition from one UI to another transparent or, coming from the other direction, making one UI work across several devices. Consider where this has been done most successfully: the iPad and iPhone. Both use the same OS with the same UI. However, in accomplishing this, the tablet version of iOS has been hindered by the need to work well on the much smaller screen of the iPhone. Google has taken a different approach. Their tablet version of Android is noticeably different from the phone version. The result is that Apple and Google are coming at the problem from opposite directions. With Ice Cream Sandwich, Google will try to unify the tablet and phone experience and thereby improve transparency when moving between the devices. Apple, with iOS 5, will try to bring better functionality to iOS so their tablet offering is less restricted by the OS and is able to come closer to the functionality of a full-blown laptop.

But tablets and phones are closely related. Any problems encountered while working to achieve transparency between the two pale compared to merging the experience with other devices. Consider the TV set. It lacks a touch screen. Any keyboard linked to the TV set will almost certainly be less than full sized. It would be easy to claim that the TV is fundamentally different and to forget transparency altogether. That would be a mistake. There is too much money at stake to take the easy way out. Here is where human factors specialists will have to shine. They will have to craft an OS interface different from that on a phone or tablet, but one which feels so similar to the phone OS interface that knowing the phone OS takes the TV OS learning curve to zero. The same will be true when looking at the man-machine interface in the automobile. Here, minimizing driver distraction will be the main goal. There are two aspects to driver distraction and they will sometimes work against each other. On the one hand, making the automobile UI look like the phone UI will allow use to be more second nature and thereby require less conscious thought on the part of the driver. The problem is that a phone OS will sometimes require that the driver look away from the road. That’s not good. A compromise will have to be reached. Like the TV, there will be other ways of interfacing with the UI than just a touch screen. There will be voice, steering wheel controls and probably a mouse-like device similar to what BMW uses in its infamous iDrive.

This high difficulty level extends to applications. I really like those iTunes store apps that have the plus sign next to the price. You pay once and you get something that works on both your iPhone and iPad. I hate paying twice for an app just because I want it on my iPhone and my iPad. Down the road I want to buy an app once and use it on whatever device is handy. This trend is already in full swing. Just take a look at Steam. When you buy a program through Steam, you can download and install it on any machine you like. You log into your Steam account to gain access. The main problem here is that you have to download a large program for each device, but that could be easily automated. Also, as more programs exist in the cloud, this will be less of an issue. Already some programs allow the user to start playing while sections of code not in use continue to download. The big problem is interface design. Imagine making Crysis work on everything from a phone to a TV set. That’s not easy. It is particularly difficult if the user wants to pick up a game in progress on one device, say a TV, where he left off on another, such as his phone. To get an idea of the scope of the problem look at Foreflight. This is a great aviation app with an excellent user interface. However, their iPhone and iPad apps are two completely different animals. With Foreflight this isn’t too big an issue, since the apps are free and the database subscription allows use on an iPhone and an iPad at the same time. The photos below show how different the interface has to be because of screen size.

The iPad version of Foreflight allows selecting different pages from any current page. Look at the bottom of the picture below.

Sections like AIRPORTS and DOWNLOADS can be selected on the bottom. Now look at these screen captures from the iPhone app. The first shows the page equivalent to the iPad page shown above.

Here you select CLOSE, which takes you to this page:

Now you can select the page you want.

I am not picking on Foreflight. Rather, I am highlighting what they have done as an example of adapting to the different screen sizes of the two devices while maintaining a lot of the same feel, so that the learning curve is low. However, this is the easy part. Adapting to TV, car, laptop, etc. will be a lot more difficult. If you are a pilot you might be wondering why anyone would want to use this program across so many devices. But, if transparent use and data are really achieved, then imagine the following scenario. You are watching TV with some friends when the talk turns to playing golf on Hilton Head the next day. You bring up Foreflight on the TV and flight plan the trip to find out the flight time and take an initial look at weather. The next morning you quickly update and file the flight plan using your laptop. On the way to the airport you notice that the morning clouds haven’t burned off as expected, so you decide to check the current airport weather. Your car interfaces into the Foreflight app on your phone and you are able to bring up the weather. In the air, passengers follow the flight’s progress on the iPad using the same program. You have bought one application and used it across numerous devices. It feels easy and natural to do, but it was only easy for you. For the developers it was a tough task. They had to sweat the user interface and how it would appear on different devices. The device manufacturers had to sweat the user interface of the OS to make sure this transparent usage would, well…, really be transparent.
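The screen-size adaptation Foreflight does by hand could, in principle, be driven by a simple device-class decision like the sketch below. This is illustrative Python only; the 7-inch cutoff and the style names are my assumptions, not anything from Foreflight or Apple.

```python
def navigation_style(screen_diagonal_in: float) -> str:
    """Pick a section-switching pattern by screen size, in the spirit of
    the Foreflight screenshots above. The 7-inch cutoff is an assumption."""
    if screen_diagonal_in >= 7.0:
        # iPad-style: sections (AIRPORTS, DOWNLOADS, ...) always visible.
        return "persistent bottom bar"
    # iPhone-style: CLOSE the current page, then pick a section.
    return "modal section picker"

for device, size in [("phone", 3.5), ("tablet", 9.7), ("TV", 40.0)]:
    print(f"{device}: {navigation_style(size)}")
```

The hard part, of course, is everything this sketch leaves out: the two patterns have to feel like the same app, which is exactly the design problem discussed above.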

Xoom

Posted: April 21, 2011 in Motorola, Tablet

I played with the Motorola Xoom the other day. It’s a nice device and not to be taken lightly. Apple may dominate tablets right now, but the Xoom shows that the Android threat is real. The UI feels more advanced than the iPad’s without having the scattered and cluttered feel of Windows. Operation was smooth and the tablet felt fast. My takeaway isn’t that the Xoom is the ultimate tablet, but rather that Android tablets will get traction and dent the present iPad monopoly. This is in contrast to the Blackberry Playbook, which I don’t see gaining traction. I was, however, taken aback by a comment from a friend. He pointed out that the UI is different from an Android phone and lacks some of the simplicity of iOS on the iPad. I found the Xoom UI great, but I am not your typical general public user. Moving from the iPhone to the iPad is transparent to the user. Hmm, I think that’s a big deal. The problem isn’t too many devices (phone, tablet, TV, PC) but rather too many user interfaces.

Why the iPad is Like Texting

Posted: April 15, 2011 in Apple, iPad

I have an iPad. I love it. When I was thinking of getting one my son said, “But Dad, it’s just an iPhone that costs more and can’t make phone calls.” He’s right. So why do I love it? Why do millions of people love it? The answer is because it’s like texting. Think of texting. What purpose does it serve? If you need immediate communication, then call. If you want to write something, then email. By just about any sane bit of logic, few people should ever text. But people do. The answer lies in balance. Texting is more immediate than email but less intrusive than a phone call. The iPad is like that. It’s more portable than a laptop but has a bigger screen than a phone. Neither texting nor the tablet should be popular, but they are because of balance. Texting is the perfect compromise between a phone call and an email. The tablet is the perfect compromise between a phone and a laptop. Just a thought.