Archive for June, 2011

Earlier I wrote about the need to instill certain core values into new college graduates, NCG’s. It is often during the training review that issues related to core values and expectations show up and where the initial reset occurs. One common issue is the belief that 99 is an A+. In terms of the NCG, this has been the case for years. Consider a typical college engineering class project. A semiconductor design class might involve the design and physical layout of a small circuit. This must be done within class time constraints. The project will be reviewed to make sure the student understands the basic ideas. Small errors and deviations will be tolerated since time was limited and the student got the main point of the exercise. The business world is different, and this is one of several places where expectations must be reset in the mind of the NCG.

To even get to the point of being able to reset expectations, without prior corporate financial damage being done, requires a well designed training program and, in particular, a well chosen training project. The project must be completable within the training timeframe (5 to 7 weeks is a good range) but with some carefully chosen exceptions. In the past I have used an 8-bit register design project with specifications that were mostly easy to meet but with a few challenges. I will reference this project in several future posts since it has proven excellent at bringing out several common NCG issues.

Today’s topic involves meeting all, and I mean ALL, design targets. As the trainee chugs along through his design, most tasks come to completion in a straightforward fashion. He begins to feel that the main point is to show competency with the tools, and that is certainly one point of the training exercise. However, there is typically one spec, be it speed or input levels, which is difficult to meet. OK, it isn’t doable for the average NCG. What I will describe next is a surprisingly common occurrence. During the review the NCG explains that he did his best and it’s close to spec. He explains that he met all other targets. The others in the room lower their heads. They know I am about to go on a rant. I start by giving an example of something most NCG’s can understand: buying that first new car. I ask him to imagine getting a new Accord, driving up to a stop sign, and finding out that the car won’t stop. He uses the hand brake to limp back to the dealer, where he is surprised to find that the dealer thinks he should be happy. You see, the car is 99.9% perfect. The radio is fine, as are the seats, the engine, and the environmental controls. The only issue is a bad seal in the brake system’s master cylinder. This is an A+ car! I then explain that just like he wouldn’t be happy with that car, our customers won’t be happy with our parts if they are only 99.9% correct. Consider a chip involved in routing data. If only 0.01% of the data gets corrupted, that’s a disaster. The only acceptable grade is 100.
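The arithmetic behind that last point is worth making concrete. Here is a back-of-the-envelope sketch; the 10 Gb/s link rate is an illustrative assumption of mine, not a spec from any real part:

```python
# What "99.99% correct" means for a chip routing data.
# The 10 Gb/s link rate is an illustrative assumption.
link_rate_bps = 10e9      # bits per second through the chip
error_fraction = 0.0001   # "only" 0.01% of the data corrupted

corrupted_bits_per_second = link_rate_bps * error_fraction
print(f"{corrupted_bits_per_second:,.0f} corrupted bits every second")
# At 10 Gb/s, a 0.01% error rate is a million bad bits per second.
```

A million bad bits every single second is what "almost perfect" buys you.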

If you have been following my blog (of course you have – right?), you might be thinking I am contradicting myself. Didn’t I say it was about just being good enough? I did and I still do. Read that older post carefully. It mentioned that quality is meeting customer expectations and needs. I wasn’t saying you could fail to meet the requirements. I said great engineering involves deeply understanding what the requirements really are and meeting them in a cost-effective manner rather than brute-force over-engineering.

There are several other topics related to 99 being a failing grade and the training review scenario I have described above. I want to save some of these for later posts so they get their full due. However, I want to touch on the limits of engineering authority. Every now and then, and more often than I would like to see, an NCG trainee circumvents my standard setup for the new car story and my ability to make the point that 99 is a failing grade. He does this in a very simple way. He discusses his inability to meet the target specification with his training supervisor, and the supervisor tells him that the present performance is good enough. That is equivalent to getting marketing to change the target datasheet. For those trainees who have gotten this special dispensation, there is a second lesson, in addition to 99 being a failing grade, that they need to understand: they have exceeded their authority. In the real world, a company will put its name on the parts they design. Organizations will count on them. The company must be confident that changes, even when necessary, will only be made out in the open and with agreement from the corporate parties responsible for setting design targets. Working in partnership with others in the company is an important core value. It is important that we all know what our charters are and what they aren’t. That doesn’t mean being sheep. It does mean bringing issues out into the open and being part of the solution. For most reading this post my comments will sound obvious. To many NCG’s they aren’t.

Arriving at Mission Control


Thursday was the big day. It was also a very full one. It started with getting to use the space shuttle simulator – the real one. For a pilot this is a big deal. It was for my kids too.


Pictures of notable visitors

A key in case you can't recognize the faces.


A few less notable people have preceded us.


The full motion simulator

Learning how to strap in


We first got to see one of the fixed simulators and then it was off to prepare for the full motion simulator. The first thing to do is to learn how to get buckled in and how to place the seat into the takeoff position. For takeoff the seatback is greater than 90 degrees. It feels weird at first.


Here is a video of my son Chris as the shuttle lifts off. The simulator actually leans you back about 80° so you are close to the position you would be in aboard the real shuttle sitting on the launch pad.

Here is a video of Chris landing the shuttle.

Chris in the commander's seat

Michi takes the controls

Launch position

Michi landing

Cathe gets a shot

Paul gets to try


At the end we got to sign the log and received our score sheets. If you want to see how we did, click the link below. CHAP is my son Chris, MDP my daughter Michelle, PEP yours truly, and Cathe was our host, who graciously tried to make the rest of us look better.

shuttle landing score sheets

After flying the main full motion simulator it was time to tour Mission Control.

The old mission control from the Apollo era

In the 60's this would be a direct call to the White House

ISS Mission Control

Chris at the Flight Director station at Space Shuttle Mission Control

A Progress had just docked with the ISS. Inside, the Russian support team.

There was a lot that we saw that I’m not posting here. There was the audio and video switching room with feeds from all over the world. That’s where they play the wake-up music. There was the data backup facility. Every conversation and video feed is recorded. They were using VHS tapes! I will post a photo of one more area. We visited the room where they monitor the worldwide network. Of particular interest to me was the equipment in the photo below:

They send data to remote sites and loop it back so they can measure bit error rate. That struck a chord since both the Advanced Memory Buffer and sRIO work I have been involved with required extensive bit error rate testing. I was more familiar with the BERTscope.
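The loopback idea is simple to sketch in software: send a known pattern, receive it back, and count mismatches. Real BER testing (with a BERTscope, or NASA's gear) is done in hardware with pseudorandom patterns at line rate; the toy model below just illustrates the comparison step, with the bit-flip probability standing in for channel noise as an assumed figure:

```python
import random

def measure_ber(tx_bits, rx_bits):
    """Bit error rate = mismatched bits / total bits compared."""
    errors = sum(t != r for t, r in zip(tx_bits, rx_bits))
    return errors / len(tx_bits)

random.seed(1)
tx = [random.randint(0, 1) for _ in range(100_000)]   # known test pattern
rx = [b ^ (random.random() < 1e-3) for b in tx]       # loopback with rare bit flips
print(f"Measured BER: {measure_ber(tx, rx):.1e}")
```

The measured rate hovers near the injected 1e-3, and the longer the pattern, the tighter the estimate, which is why real BER runs take so long at very low error rates.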


That’s it for the simulator and Mission Control.

Just about every company pays lip service to the value of hiring new college grads, NCG’s, but few do a great job utilizing this raw talent. To start with, few realize the true potential of NCG hires and why they are so important. They hire a few, sprinkle them around the company, and are done. Some pay lip service to training, but with training that is poorly targeted. Even fewer understand what they should be looking for in NCG’s.

Let’s start by understanding the fundamentals of hiring NCG’s and what to look for. To do this we need to look at the downsides of hiring experienced engineers. If your first thought is salary, then you are way off base. When you go for experienced engineers you are rarely able to get the best people. The best people usually have golden handcuffs keeping them at their present company. You get the person below the best. He may be great. However, the interview often tells you less than you think. It is difficult to decide whether the candidate is the creative mind that generated the great work on his resume or a following mind led by someone else. Knowledge is great, but the ability to generate new knowledge and be creative in previously unexplored areas of engineering is what makes great engineers.

Now consider hiring NCG’s. There are no golden handcuffs. They assume they will be relocating, so they are very mobile and will go where you want them to. You stand a lot better chance of getting that brilliant, creative mind. You do have to aim your interview process in the proper direction. The interview questions must look at understanding and insight rather than memory work. I was once told that all you had to know to pass my interview was Ohm’s Law, Q=CV, and charge conservation. That’s a bit of a stretch, but there is truth in it. I replied that to pass my interview you didn’t just have to know those items; they must be understood at a deep, inner level where they have become intuitive. Those people are rare, whether experienced or not. However, your best shot at hiring one is looking at NCG’s.
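To show the kind of intuition I mean, here is the classic two-capacitor problem worked with nothing but Q=CV and charge conservation. This is an illustrative example of my own (with arbitrary component values), not one of my actual interview questions:

```python
# The classic two-capacitor problem: Q = CV plus charge conservation.
# Values are arbitrary; working in uF / V / uC / uJ keeps the numbers tidy.
C1, V1 = 1.0, 10.0   # 1 uF capacitor charged to 10 V
C2, V2 = 1.0, 0.0    # identical capacitor, uncharged

Q_total = C1 * V1 + C2 * V2      # charge (uC) is conserved when they are connected
V_final = Q_total / (C1 + C2)    # Q = CV applied to the parallel combination

E_before = 0.5 * C1 * V1**2 + 0.5 * C2 * V2**2   # energy in uJ
E_after = 0.5 * (C1 + C2) * V_final**2

print(f"Final voltage: {V_final} V")                        # 5.0 V
print(f"Energy: {E_before} uJ before, {E_after} uJ after")  # half is lost
```

Knowing the formulas gets you the 5 V answer. Intuition is knowing, without grinding through the math, that half the energy must vanish no matter how small the connecting resistance is.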

Once you have hired that rare, insightful mind, you need to make good use of it. That means training them, setting expectations, and instilling the habits of great engineers. It does not mean throwing them onto a project and calling the hiring process done. What I have done in the past is put the NCG’s through a five to seven week training program. A proper training program should accomplish several major goals. First, the student must learn the basics of the tools he will use. Second, he must come to understand the overall framework that a design process follows. This is much like a student taking general physics in college. The classes that follow (optics, mechanics, relativity, thermodynamics, etc.) are elaborations of what was taught in general physics. The general physics class acts as a framework onto which the new knowledge is placed. In the engineering world it is important that everyone always understands how individual processes and engineering steps fit into the overall design process. The third goal is to inculcate a number of attitudes that make up a great engineer. These attitudes are often overlooked. Experienced engineers either have them or they don’t. With NCG’s you stand a good chance of forming a proper view of the engineering world. These attitudes are so important I will make them the subjects of separate posts; it’s just too much to include right now.

It takes a lot to get these concepts ingrained, but it starts with the first design review. The training program should involve designing a small part from start to finish. It should cover more than what the engineer will work on when on a real project. It is important that the NCG develop a sense of the issues confronting others and how his work will interface with their work product. The training, if successful, ends with a design review. It can be a bit brutal since it is meant to be a reality check. For that reason it is limited to senior staff who are intimately familiar with the goals of the training process. It is this review which introduces the engineer to concepts such as why 99 is a failing grade, the limits of his decision-making authority, how an engineer manages his time and resources to prevent spinning his wheels, and how to properly run a design review so that the correct objectives are achieved.

After the design review has been successfully completed, the NCG is ready to become part of a design team and commence work on his first project. The project lead can extend the basics the NCG learned during training and concentrate on bringing him along as an engineer. The group, having a common set of values and attitudes, is stronger and more functional. During its early days, Microsoft was built on this idea. It is key that everyone be well integrated into a common culture which elevates getting the job done in a correct fashion and discourages destructive behavior. Properly done, this culture brings out creativity rather than stifling it while keeping divergent activities in check. After all, the goal is to sell a product and make money. We’re talking engineering, not science.

There is a caveat here. Life is gray. It’s all about balance. I have been focusing on new college graduates. There is also a place for the experienced engineer. When to go for experience and how to select that engineer is a topic for another post.

I haven’t published in several days. It’s just been crazy. I have a friend who works at Mission Control at the Johnson Space Center, JSC, in Houston. She has been suggesting for several months that I bring the kids down to get a behind-the-scenes tour of the facility. With the shuttle program shutting down, over 6,000 people in Houston and 20,000 nationwide will lose their jobs. My friend is slated to be one of the casualties. That meant it was now or never for the tour. Additionally, this past Thursday was the last chance to get on the full motion shuttle simulator. It’s scheduled to be torn down in a couple of weeks. As a pilot, I just couldn’t pass up a chance to fly the main shuttle simulator. The problem was weather. Pretty it wasn’t. In the end it meant leaving the house at 3:30 AM last Wednesday to head to the airport. Headwinds were about 25 kts most of the way. We avoided the storms until right at the end. As I turned onto final for the ILS 35L approach at Ellington I was informed there was rain over the field. NEXRAD showed red over the field and I was about ready to head elsewhere, but I was informed it was just heavy rain. When the 500′ callout happened I still couldn’t see the runway. I was thinking this was going to be a missed approach with a diversion. Then my daughter said she could see the runway, and indeed I could too. Winds were gusty but manageable. We made it, but we did get wet unloading the plane. I am glad I had gone up with an instructor and done five practice approaches just a week earlier. It was great having the DFC-100 autopilot in the plane.

We had arrived about 8:30 AM local time. We were tired but decided not to waste the rest of the day. We filled it with a trip to Space Center Houston. Among other things there was a simple space shuttle simulator. That meant a chance to practice before trying the real simulator. It was humbling for me since both of my kids, Chris, 14, and Michelle, 10, pwned me. Here is Chris showing me how it should be done.

On a more general techie note, I was surprised at the use of QR codes. They were all over the Saturn V exhibit. As we were to find out later, not all of NASA is this up-to-date.

That’s a QR code at the bottom of the sign.

Thursday was the simulator. When I get the photos off the camera I will publish another update. Just a “heads up.” It was awesome.

 

I just finished talking to a friend and mentioned my last post. As we discussed it, I realized it was about the individual engineer. Missing was how the concept of “good enough” affects business. That was a very big omission on my part. It’s time to fix that mistake.

When it comes to business, the idea of being “good enough” has a huge impact on decision making and the future of companies. This is best illustrated by discussing an area I am intimately familiar with: the semiconductor business. Imagine you are running the fictitious company GPScom. GPScom makes the world’s best GPS chips. They are WAAS and LAAS capable, very sensitive, and simply the best GPS chips by far. They own the personal navigation device, PND, market along with being in most other GPS-based devices. However, companies like Broadcom and Qualcomm start producing mobile phone chipsets with relatively poor GPS receivers in them. These chipsets have only basic GPS circuitry that lacks not only LAAS but also WAAS. They aren’t very sensitive, and the GPS might say you are in the parking lot when you are really in a cafe having a bowl of soup. The problem is, these receivers are mostly good enough. With proper software the mobile phone becomes a decent navigation device. Furthermore, after a couple of generations, these phone chipsets have GPS receivers that are more than good enough. Since they are integrated into the mobile phone chipset, they burn less power, take up less precious board real estate, and cost less. This means they do a better job of meeting the needs of the consumer. The PND market begins to fade as mobile phones take it over. This is happening today. GPScom engineers can tell themselves all day long how their circuitry is superior. The problem is that sales, and hence revenue, are going to Broadcom and Qualcomm. GPScom will have to either diversify, get acquired, or watch itself become less and less relevant until it fades into the sunset.

As an area of technology moves forward, there comes a tipping point when what can be put into a CMOS chip is good enough. At that point the technology gets integrated and becomes part of a bigger solution. Companies that fail to recognize this risk becoming irrelevant. This is why the idea of “good enough” isn’t just about the individual engineer; it affects core business strategy. Every company needs to worry that its area of core competency will evolve to the point where “good enough” marginalizes its value. These companies must either diversify so that they make the chips with the broader functionality, acquire technology that can be integrated in, or get acquired themselves. Is your company aware of just what “good enough” is when it comes to its specialty areas, or does hubris cloud its thinking?

Clichés exist because they contain truth in an easy-to-digest form. There’s an old saying among engineers: “Anyone can build a bridge. It takes a good engineer to do it on time and under budget.” That one holds the essence of why I consider good engineering more difficult to accomplish than good science. My formal training was as a scientist. I have been around scientific research in both the theoretical and experimental areas, and I certainly appreciate the difficulties involved. However, it is the imposition of schedule and budget that makes good engineering even more difficult than good science. Budget doesn’t just apply to the resources involved in the creation of the item; it also covers the cost of manufacture. Great engineering means understanding “just good enough.” Like many topics in this blog, the concept of “just good enough” is much broader and more important than many people think. It is related to the concept of quality. In his book Quality is Free, Philip Crosby defines quality as “conformance to requirements.” Great engineering meets the customer’s needs in the best manner. Best, in most cases, means finding a solution the customer can afford. For this reason, designing a mid-sized sedan like the Honda Accord is much more difficult than designing something like a Ferrari Italia. The Accord is in a much more competitive space and has tremendous budget constraints. If you want to upgrade the audio system, then you have to find cost savings elsewhere. Many thousands of components have characteristics that must be traded off in order to meet the target price point. The Ferrari design starts by asking “What’s best?” Just for fun, when it comes to the Accord, you get to layer on tougher customer expectations. The Accord isn’t a showpiece. It is a day-to-day working automobile and must perform perfectly for many years with few service needs. The Ferrari is expected to require some pampering. Even several-year-old Ferraris usually have just a few thousand miles on them. The Accord is a much tougher design challenge.

One engineer I admire is Steve Wozniak. If you look at the Apple II, the computer that made Apple a real company, you find many examples of awesome engineering. Again and again, features are included and performance is achieved with elegant rather than brute-force design. The result was a great combination of features at a reasonable price for its day. To highlight what I mean by “just good enough” I am going to single out just one of the many elegant design choices in the Apple II; but first I need to set the stage.

The personal computing era was kicked off in 1975 with the January issue of Popular Electronics. The cover article was on the construction of a computer kit called the MITS Altair 8800. With it came the introduction of the S100 bus. The Altair 8800 was a frame style design where cards were added to increase functionality. While many functions such as main memory have moved to the motherboard, we retain this expansion concept today although the S100 bus has mostly moved into history.

The Altair 8800 was copied by many companies and expanded upon. The S100 bus became an industry standard expansion bus, and lots of companies made cards for it. Because of this, a lot of computers placed only the basics on the motherboard in an effort to control price. There were problems with this approach. Since there was no game controller (joystick, paddle, buttons) functionality included in the Altair, there was no standardized game interface. I once looked at the cost of adding joysticks to an S100-based computer. The card alone was several hundred dollars. The approach involved expensive analog-to-digital converters (ADCs). The result was that only keyboard-based games evolved for the S100-based machines.

During this time, games like Pong and Breakout were popular. It made sense to bring them to personal computers, but they required interactive game controllers, i.e., paddles or joysticks. A keyboard used as a controller lacked the same smooth interactivity. Using the keyboard for games was a compromise aimed at satisfying the engineers and accountants rather than the customers, but it was a compromise most computer manufacturers had adopted. Enter Apple and a few others. In 1977 Apple introduced the Apple II. It came with game paddles along with games like Breakout. To accomplish this in a cost-effective manner, Wozniak pushed most of the design into software. Since he had designed Breakout in hardware for Atari, this was a big change in mindset. Great engineers adopt what is best as opposed to just reworking what they did in the past. Simplifying hardware and pushing complexity into software would turn out to be a very important trend. Here was that trend at a very early stage. Look at the schematic below.

This is part of the schematic of the Apple II included in the Apple II Reference Manual dated January 1978. What looks like a 553 integrated circuit (H13) is actually a 558, a quad version of the venerable 555 timer chip. The 558 is used to generate four paddle, or two joystick, inputs. Each paddle is just a variable resistor. Hooked into the 558, the resistance of the paddle controller determines the oscillation frequency of a simple RC oscillator. A loop in the code keeps reading the oscillator. The microprocessor can only read a 1 or a 0: if the voltage is above a certain level the microprocessor sees a 1; below that it sees a 0. The Apple II loops while looking at the game paddle input. By looking at the pattern, for example 111000111000111000, it can determine the frequency of oscillation. This is then related to a game paddle position, and the on-screen paddle is moved to the appropriate position. The beauty of this is that the paddle controller doesn’t have to be especially linear. The paddles just need to be consistent, i.e., all paddles need to act the same way. Nonlinearities can be corrected in software. To the user, who gets visual feedback as he looks at the screen while turning the paddle, this is all “just good enough.” It is also a high-quality solution since it meets the user’s expectations and the requirements for playing games like Breakout. Including games and controllers gave the Apple II great consumer appeal and was a big part of its success, and with it the success of Apple Computer.
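The measurement loop is easy to model in a few lines. This is a toy Python model of the pattern-counting idea described above, not the original Apple II 6502 routine, and the period range and 280-column screen mapping are assumed values of mine:

```python
def oscillation_period(samples):
    """Estimate the RC oscillator period from a sampled 1/0 stream.

    A toy model of the pattern-counting idea described above,
    not the original Apple II 6502 routine.
    """
    # Indices where the sampled bit changes value.
    edges = [i for i in range(1, len(samples)) if samples[i] != samples[i - 1]]
    if len(edges) < 2:
        return None
    # Each gap between edges is half an oscillation period.
    half_periods = [b - a for a, b in zip(edges, edges[1:])]
    return 2 * sum(half_periods) / len(half_periods)

def paddle_position(period, min_p=2.0, max_p=20.0, screen_width=280):
    """Map a period to a screen column. A lookup table or curve fit here
    is exactly where software correction of paddle nonlinearity would live.
    (The min/max period and screen width are assumed, illustrative values.)"""
    period = min(max(period, min_p), max_p)
    return round((period - min_p) / (max_p - min_p) * (screen_width - 1))

pattern = [1, 1, 1, 0, 0, 0] * 3      # the "111000111000111000" from the text
print(oscillation_period(pattern))    # 6.0 samples per cycle
print(paddle_position(6.0))           # column 62
```

Notice that nothing here cares whether the paddle's resistance-to-period curve is linear; consistency is all that matters, since the mapping function can absorb any fixed nonlinearity.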

Today we often see companies just iterating on a theme. These are the so-so companies. Great companies sit back, look at the bigger picture and think about possibilities. Rather than layering expensive, iterative solutions on each other, the great companies rethink the approach and create solutions that are cost effective while meeting user requirements. Exceptional companies go beyond this and create solutions to user requirements that the user didn’t know he had. That, however, is a topic for another post.

While I was out flying today the discussion moved to how things should work. As I posted earlier, the initial discussion was about autopilot operation. It later turned to meetings and how they will change in the future. I have talked about the phone as the primary computing device. I want to outline how this will merge with the future meeting room. You are at your desk working on material for the meeting when the clock becomes your enemy and it is time to go. You are able to work right up to the last minute because your work will move with you. You stand up and the screen on your desk goes dark. You walk to the boardroom and sit down. In front of you a screen, keyboard, and mouse become active. You are back where you were. There is one addition: the conference room is on its own small subnet. When you sat down several things happened. Inductive circuitry in the chair began charging your phone. A short-range link connected the screen, keyboard, and mouse. As you sat down you were connected, and your contact information and picture were collected. If you are a company member you are connected back to the main network. If you are a visitor it is a guest network which allows internet access but keeps the internal network isolated. On your screen you see a graphic of the meeting room table. At each location is a picture along with the name of the person sitting at that position. A click on the image reveals the information on a standard business card. During the meeting Bill asks if you have received the latest proposal from legal. He needs to see it when you are done. You say you have received it and have finished marking it up. You drag it to Bill’s image on the conference room graphic and a copy is sent to Bill. Now it’s your turn to present. Fortunately you are ready. A simple click and your presentation is on the large display. A click on your tablet brings up the presentation complete with speaker notes. As you stand up, the screen on the meeting room table goes blank and the phone stops charging, but you are still on the meeting room network and your presentation is still displayed on the large screen. You move seamlessly between devices and use the one best suited for the moment.

A few days later there is a meeting of a different kind. It’s a late-night conference call. Hey, that comes with being part of a central support organization for a company with operations in China. You sit down at a desk and start the video call. Like the boardroom example, you are able to transfer files by dragging and dropping them onto the picture of someone on the call. This scenario is pretty much here today; what remains is to make the user interface more transparent in use. With cameras now standard on both PC’s and tablets, expect video conferencing to increase a lot over the next two years. One new addition will be the capability to seamlessly transfer the call from device to device. A call might be started on your phone. As you walk into your office it would transfer to the large screen on your desk. A personal call might start out on your TV but be transferred to your phone as you head out. In your car you wouldn’t have video, but it would be back on your phone when you got to your destination.

 

 

I’m back home and connected. Yeah! My kids are happy since World of Warcraft now works well. I’m trying to catch up and realized I haven’t posted in several days. Next week won’t be any better since I will be heading to Houston for a behind the scenes tour of Mission Control. I hope that trip is as much fun as I expect it will be.

Now to the techie stuff. I was flying today and the conversation turned to how things should work vs. how they really work. Of course the initial topic was about flying. I was working through approach procedures using a new autopilot. I fly a Cirrus SR22 equipped with Avidyne R9 avionics. Recently the autopilot was upgraded from the STEC 55X to the Avidyne DFC-100. This is a big upgrade. The STEC understood rate of turn (from a turn coordinator), altitude (from an air pressure sensor), course error (from the Horizontal Situation Indicator), and GPS course. The new autopilot receives input from the GPS, the Flight Management System, and the Air Data Attitude Heading Reference System. In other words, it knows just about everything about the airplane and its condition. It even knows flap position and engine power. The end result is a vastly superior autopilot. Sequencing is automatic (most times; see below). You can put in a flight profile and the plane will fly it, including climbs and descents. The operation is very intuitive and a great example of intelligent user interface design. If you are climbing at a fixed IAS (Indicated AirSpeed) and set up to lock onto a fixed altitude, the IAS button is green to show it is active and the ALT button is blue to show it is enabled but not locked. When you get to the desired altitude, the ALT light blinks green and then goes steady green when locked onto the desired altitude. I could go on and on about how great this is, and if you have questions just ask.

Now to more specifics about interface design. When you use the DFC-100 autopilot to fly an instrument landing system, ILS, approach, it is very automatic. If you punch VNAV, vertical navigation, you can  have the autopilot fly the entire procedure including the appropriate altitudes. When the radio signal of the ILS is received and verified correct (all automatic) the system shifts to using the electronic ILS pathway to the runway. So far everything has been very automatic. If you exit the clouds and see the runway you disconnect the autopilot and land. The problem comes when the clouds are too low to see the runway even when you are close and down low. This is a very dangerous time. At the critical point the plane is 200′ above the ground and there is little margin for error. If you don’t see the ground you execute the missed approach. This is where the great user interface breaks down. If you do nothing the autopilot will fly the plane into the ground. In order to have it fly the missed approach the following must happen. After the final approach fix, but only after, you must press a button labeled Enable Missed Approach. At the decision height when you are 200′ above the ground you must either disconnect the autopilot and start the missed approach procedure manually or shift from ILS to FMS as the navigation source and press the VNAV button. I can hear people, including pilots, asking me what the big deal is. The big deal is that this is when you really want the automatic systems looking over your shoulder and helping out. If you forget to shift from ILS to FMS the plane will want to fly into the ground. That’s a very bad thing. The system is still great. Even at this moment it is much better than the old system. I am not saying I would want to go back. I am saying it could be better and that this operation doesn’t fit with how seamless the autopilot’s operation usually is. What the system should do is automatically arm the missed approach. 
I see no reason for this to be a required manual operation with the potential to be forgotten. The pilot should select the decision height at which the missed approach will begin to be executed. When that point is reached, if the autopilot has not been disconnected, the autopilot should start flying the missed approach, including VNAV functionality. That includes shifting the navigation source from ILS to FMS automatically. The result would be increased safety, since the system wouldn’t require command input from the pilot at a critical moment.
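To make the proposal concrete, here is a rough sketch of the logic I am describing. The names and structure are hypothetical, invented for illustration; this is not the DFC-100's actual interface.

```python
# Sketch of the proposal above: if the autopilot is still engaged at the
# pilot-selected decision height, automatically shift the nav source from
# ILS to FMS and begin flying the missed approach. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class Autopilot:
    engaged: bool = True
    nav_source: str = "ILS"
    flying_missed: bool = False

def update_at_decision_height(ap: Autopilot, altitude_agl_ft: float,
                              decision_height_ft: float = 200) -> Autopilot:
    """At or below the decision height, auto-arm and fly the missed approach
    instead of continuing the ILS glideslope toward the ground."""
    if ap.engaged and altitude_agl_ft <= decision_height_ft and not ap.flying_missed:
        ap.nav_source = "FMS"    # shift navigation source automatically
        ap.flying_missed = True  # fly the published missed approach, VNAV included
    return ap

ap = update_at_decision_height(Autopilot(), altitude_agl_ft=200)
print(ap.nav_source, ap.flying_missed)  # FMS True
```

The point of the sketch is that the nav source shift and the missed approach arming become one automatic step, removing the button press a pilot can forget at the worst possible moment.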

The discussion above relates to what I have been covering in this blog. As computing systems improve and move into every area of our lives, issues like the one above will pop up. Everything about the DFC-100 is vastly superior to the old STEC. The issue is consistency of use. As our computing systems get better and better user interfaces, minor inconsistencies will appear to us as big annoyances. Look at the iPad. If you think of it as an eBook reader that lets you view mail and surf the web, it is an awesome device. If you look at it as a fun device with simple apps and games, it is awesome. As soon as you want it to be your main computer, things like the lack of a user-accessible directory structure become big problems. Compared to the old Newton or the PDA, the iPad and the iPhone are major advances. However, with this new capability come raised expectations. Developers don’t get to do great things and then sit back. As soon as users get comfortable with the next great thing, they begin to find annoyances. One of Apple’s strengths has been minimizing these annoyances, but even on the best devices they are there. Consistency of user experience is a big deal, and getting there is tough. My point is that small details matter. How the icons look, how smooth the scrolling is, and the animation when actions are taken are all small things that matter. One of the reasons for the success of the iPad and iPhone has been this consistency and sweating the details when it comes to the user interface. As we merge devices and functions in the post-PC world, it will be critical that these disruptions, the non-transparent use scenarios, be identified and fixed.

I thought about making the title of this post “I’m Right – They’re Wrong.” While I like the cloud for data everywhere and for syncing of data, I don’t believe in data ONLY in the cloud. There has been a lot of press around putting everything in the cloud. The Chromebook is one attempt at this. On the surface, my techie side gets excited. I hear cheap, long battery life, one data set and a unified experience across devices. The main thing I hear is low upkeep: someone else does most of the application updates and makes sure things work. This last part, however, sounded hauntingly familiar. Then it hit me. This was the promise of thin clients. A long time ago, in a different computing world, thin clients were going to save companies lots of money. The clients themselves would be cheaper; falling PC prices killed that as a major selling point. The second selling point was ease and consistency of software maintenance. The problem was that the world went mobile. People couldn’t afford to lose software access when they weren’t on the corporate network. In the end, thin clients failed. Fast forward to today. The same issues apply to the Chromebook. Why get a Chromebook when a netbook can do so much more? Then there is the issue of connectivity. What happens when there isn’t a WiFi hotspot around? Are you thinking 3G/4G? Think again. Look at today’s data plans and their capped data. Most people can’t afford to have everything they type, every song they play, every picture they look at and every video clip they show go over the network. Local storage can solve some of this, but then you have independent data, and the programs to access that data, on the local machine. In other words, you are back to managing a PC again.

Currently I am visiting my sister in Mobile, AL. I realized I needed to freshen up my blog, and waiting till I got back home would take too long. No problem, I thought. I have my iPad with me, and it will be a chance to learn the basics of Blogsy. That’s what I’m doing now, but it has been an enlightening experience and is the genesis of this post. What you need to know is that my sister’s house lacks WiFi. Since she and her husband spend a lot of time traveling in their RV, they use a Verizon 4G modem plugged into their laptop. That works for them, but it doesn’t help me unless I go sit at my brother-in-law’s laptop. Of course there’s no need for that since my iPad has 3G. Oops! One big problem – the connection is unreliable. Here I am in Mobile, AL, a few miles from their regional airport, and I can’t get a reliable data connection. I could launch into an AT&T tirade, but that would miss the bigger picture. Mobile, AL is a major city. If I have problems here, then what about more remote places? What about other countries? What if I were using a Chromebook? Right now I am writing this post offline. I will upload it when I have a better connection. I just can’t see buying into a usage model that demands 24/7 connectivity. For that reason I have no desire for a Chromebook. The Chromebook will fail.

Transparency of use is still coming, but it will happen in a way that takes into account the issues I have just raised. Apple’s iCloud will sync data and leave a copy on each device. Microsoft Mesh does the same. I still believe that a modified version of this, together with the Chromebook approach, will win in the end. The difference will be that the modified Chromebook (phonebook? Plattbook? iBook?) won’t connect to the internet directly but will be a peripheral device for the phone. Your phone will be your wallet and as such always with you. It will also be your primary data device. It will sync with other devices through the cloud and be backed up to the cloud, but interactive data access will be to the phone.

Firemint has announced a dual-screen capability for Real Racing 2 HD which uses AirPlay mirroring in iOS 5 to show a race car on your TV (via Apple TV) while status information is on your iPad. The iPad acts as the controller. This is a bit similar to what Nintendo is showing at E3 for their Wii U. However, what I don’t see is multiplayer. Also, the iPad is running the game; the Apple TV is just acting as a display device. This isn’t as complete as where Nintendo is heading, but I see no reason it can’t be. Apple just has to make the Apple TV a gaming platform. Come on, Apple. The hooks are there.