
A friend sent me a link to an article on changes coming in microprocessors: The Lifer: Why Your Core i7 Processor May Be Obsolete Sooner Than You Think. It got me thinking about writing this post not because the article offers any great insight but because of the opposite: the article is too shallow.

One of the topics mentioned is specialized computing. This is nothing new. While it wasn’t the beginning, many people may remember the Intel 8087 floating point coprocessor that offloaded the 8086. Earlier there was the less well known 8231A. I have linked to a copy of the datasheet if you want to see how things used to be. The 8231A paired with the 8080 microprocessor. Interestingly, considering the two companies today, the 8231 and 8231A were licensed versions of AMD’s Am9511 and Am9511A, introduced in 1977. Today, we take it for granted that this floating point capability is built into the processors we use.

Throughout computing history, research agencies have driven the need for large, somewhat specialized computers. From the CDC 6600 (1964), to the Cray-1 (1976), to Nebulae (2010), floating point performance has driven a class of supercomputers designed for scientific and military research. Originally these designs employed vector processors. Today, machines like Nebulae use off-the-shelf graphics processors as general purpose computing engines (GPGPU). In particular, Nvidia has started marketing to this area. The problem is that modern GPUs are basically SIMD machines and bring along many of the limitations of a SIMD architecture. Working within those limitations and mitigating them is a big topic with a large body of work, so I won’t address it in depth here. For restricted problems such as graphics rendering it is a very effective approach. At the top end, the AMD Radeon HD 6990 graphics card contains two processor chips which together yield 3072 stream processors, 192 texture units, 128 Z/stencil ROP units, and 64 color ROP units. For graphics rendering this gives amazing performance. What it is not good at is general computing. In summary, specialized computing is nothing new and has been with us for a long time. Massively parallel specialized computing is here today.
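To give a flavor of the SIMD limitation without going deep, here is a minimal Haskell sketch of my own (ordinary CPU code using the vector package, not GPU code, and the function names are mine) contrasting the two code shapes. The first applies the same arithmetic to every element, which is exactly the lockstep pattern SIMD lanes are built for. The second does a different amount of work depending on each element’s value; on a SIMD machine the lanes diverge, and the hardware has to mask or serialize the branches, throwing away much of its throughput.

```haskell
import qualified Data.Vector.Unboxed as U

-- SIMD-friendly shape: identical arithmetic on every element,
-- so all lanes can execute the same instruction in lockstep.
scaleAndOffset :: U.Vector Double -> U.Vector Double
scaleAndOffset = U.map (\x -> 2.5 * x + 1.0)

-- SIMD-unfriendly shape: the work per element depends on its value.
-- A SIMD machine runs every lane through both branches (masking off
-- the inactive ones), so the expensive path sets the pace for all.
branchyWork :: U.Vector Double -> U.Vector Double
branchyWork = U.map step
  where
    step x
      | x < 0.0   = sqrt (negate x)        -- cheap path
      | otherwise = spin (100 :: Int) x    -- expensive path
    spin 0 acc = acc
    spin n acc = spin (n - 1) (sin acc + cos acc)

main :: IO ()
main = do
  let xs = U.generate 8 (\i -> fromIntegral i - 4.0)
  print (scaleAndOffset xs)
  print (branchyWork xs)
```

Graphics rendering is overwhelmingly the first shape; general purpose code is full of the second.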

Myslewski talks about large numbers of general purpose computing cores. We have made great progress utilizing four-core and even eight-core systems. There are restricted problems, such as design rule verification of large chip designs, which are amenable to massively parallel systems. However, general purpose computing has trouble utilizing even four cores effectively (I will put a number on that below). More interesting than the straightforward approach Myslewski mentions are approaches which reconsider the very nature of what a processor is. I have been thinking about this lately after watching a talk by Steve Teig of Tabula:

http://www.c-eda.org/IEEE-CEDA-DAC-061510/IEEE-CEDA-DAC-061510.html
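Before digging into the talk, let me put a number on why simply adding general purpose cores runs out of steam. Amdahl’s law gives the ceiling: if a fraction p of a program’s work can run in parallel, the best possible speedup on N cores is

S(N) = 1 / ((1 − p) + p/N)

Even a program that is 80% parallelizable gets at most a 2.5x speedup from four cores (1 / (0.2 + 0.8/4) = 2.5) and can never exceed 5x no matter how many cores you add. Typical desktop workloads, with their serial, branchy control flow, often sit well below that 80%, which is why throwing more identical cores at them helps so little.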

Steve mentions Haskell as a language of choice. This is a transition that is needed and is fundamental. We currently force-fit a one-CPU ecosystem onto multi-CPU processors. We patch language structures and manually work to make task division successful. In graphics this is somewhat straightforward: you tell the different cores, “Core 1, you work on this area of the scene; core 2, you work over here; core 3…” Except for specialized areas such as graphics, this model does not fit what we do today once we get beyond four cores. Right now we can, at a very simplistic level, say, “Core 1, you handle operating system commands; core 2, you run the program; core 3, you take care of the antivirus background tasks; core 4…” What is wrong here is the process and mindset itself. That’s why Steve mentions Haskell. The mental process I just outlined is forcing the code onto the processor. What is needed is a new paradigm of code as architecture.

I am not talking about the Tensilica approach but something closer to the work discussed here. If you read through the various papers you will see a common theme related to the problem of limited FPGA size. The idea of time as a third dimension opens the door to a possible solution. What needs to be worked out is an interface that gets around the von Neumann memory bottleneck and allows continuous reconfiguration of the FPGA. Once that is achieved, arbitrarily large code can be executed with a three-dimensional FPGA (X, Y, time) as the direct instantiation of the code. For an example of this type of FPGA, check out Tabula. Be careful not to get lost in the hardware, although it is a key component. The main advantage of the hardware is its ability to latch its state and rapidly reconfigure. More important than that functionality is compiling down into the FPGA in a way which allows a mapping of code to circuitry that continuously reconfigures as the code executes, rather than the cycle of execute, save state, load code, reconfigure, execute. Let me know what you think of the concept of code as architecture.
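To make the Haskell angle a little more concrete before you weigh in, here is a minimal sketch of my own (ordinary multicore Haskell using the parallel package, nothing from Teig’s talk, and the names are mine). Because the functions are pure, the dependency structure of the computation is fully visible, and the decision about how to lay the work out across hardware is stated separately from the algorithm itself. That separation of “what is computed” from “how it is mapped onto the machine” is a software-level taste of code as architecture.

```haskell
import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

-- A pure per-element computation: no shared state, no side effects,
-- so every application is independent by construction.
score :: Double -> Double
score x = sum [sin (x * fromIntegral k) | k <- [1 .. 200 :: Int]]

-- The algorithm: just a map. Nothing here says anything about cores.
scores :: [Double] -> [Double]
scores = map score

-- The "architecture" decision lives in the evaluation strategy:
-- evaluate the list in parallel, in chunks of 256 elements.
-- Swap the strategy and the algorithm above is untouched.
scoresPar :: [Double] -> [Double]
scoresPar xs = map score xs `using` parListChunk 256 rdeepseq

main :: IO ()
main = do
  let inputs = [0.001, 0.002 .. 10.0]
  print (sum (scores inputs))     -- sequential evaluation
  print (sum (scoresPar inputs))  -- same result, evaluated in parallel
```

Compiled with GHC’s -threaded flag and run with +RTS -N, the runtime spreads the chunks across however many cores are available without the algorithm changing at all; a compiler targeting a continuously reconfiguring fabric would need the same property, carried all the way down into circuitry.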


I mentioned that Mango showed Microsoft could come on strong once they recognized they were behind. I saw a few unexpected features in Mango, and it gave me hope that Microsoft was still in the game, albeit very far behind. However, with the release of more information about Windows 8, I am truly surprised. Microsoft really gets it. They see the need for a unified OS across platforms and for a transparent user experience. Furthermore, Microsoft is using its strength on the desktop to leverage itself into the tablet and phone space. This isn’t my pick for the easiest path in general, but it is the easiest and best way for Microsoft. More than other releases, Windows 8 will be about an aggressive business strategy. I love it when business, the consumer, and engineering mesh at such an intimate level.

Windows 8 is important on several levels. First, let’s start with the fact that it will run not only on x86 CPUs but also on ARM. Wow! Let that sink in. This means Windows on a CPU that isn’t compatible with the Intel x86 architecture. There will be no emulation layer, so current x86 apps won’t run on ARM-based hardware. Even so, the move is important in and of itself. Microsoft will be encouraging developers writing lighter apps to write them in HTML5 and JavaScript so the apps will be independent of the CPU used. Add this to Apple toying with the idea of an ARM-based MacBook Air and you know why Intel is nervous.

The next surprise is the breadth of Windows 8. It is really a tablet OS where the mouse and keyboard can substitute for touch. You read that correctly. The OS is, in many ways, a tablet OS first and a desktop OS second. This doesn’t mean a compromised desktop OS. What it does mean is an OS with touch infused throughout. The same OS will run on tablets, laptops and desktops.

They say a picture is worth a thousand words and the next surprise is best illustrated with a couple of pictures. Here is one of Windows 8 on a PC:

Next I have a picture of the home screen from a phone running Windows Phone.

Do you see what I am excited about? Just like Apple, Microsoft is making the desktop OS look and feel like the phone OS. Do you believe me now when I talk about the push for transparency of the computing experience? Now go back to the comment above about Microsoft pushing for apps written in HTML5 and JavaScript. Those will be easy to port to Windows Phone and vice versa. Microsoft may be late, but they are coming on strong.

What does this mean on the business side? Obviously the push onto ARM is a threat to Intel and AMD. In terms of the other hardware and software players, here is how I see it. RIM is in an increasingly bad position. They have zero desktop presence, and Microsoft is stronger in the corporate world than RIM. Windows 8 might seem independent of RIM’s BlackBerry world but, in actuality, it has the potential to do great damage. HP may take a hit too. They are betting a lot on webOS, and I don’t see what the value add is for webOS. Call this one more “wait and see,” but be skeptical. HP could quickly shift to being Windows 8 centric if need be. Heck, they are Windows centric today.

Apple probably fares OK in the near term. Longer term they might lose some of their momentum. However, I see Apple as the best positioned against Windows 8 if they can continue to move towards merging iOS and OSX. I’m still very strong on Apple. Next up for Apple is iOS 5 and iCloud, which will be announced next week.

Windows 8 could be problematic for Google. I have trouble believing in Chrome OS as a desktop OS. Google will still be ahead in the TV space, but compared to Microsoft and Apple they lack the desktop. Android is the largest selling smartphone OS, and we are about to be inundated with Android tablets, including some excellent ones such as the Samsung Galaxy Tab 10.1. I still see Microsoft being behind Google, but it is a lot more interesting than it was a day ago. Apple just made iWork available on the iPhone in addition to the iPad and OSX devices. Microsoft will have Office running across all devices. Will people buy into Google’s idea that web based solutions are the best answer for their productivity apps? People may, but only if Microsoft screws things up. Then again, Microsoft has mucked things up in the past with poorly conceived products like Works.

It may seem like I have been mostly regurgitating news. Look deeper. I am trying to point out the trends of convergence and transparency and how they are reaching everywhere. On the surface, Google Wallet is a nice tweak to how you pay for what you buy. In terms of those affected, it is easy to see the retailers, banks and credit card companies. If you look on the surface at semiconductor companies, you might just think about the chips which enable NFC. But this is part of something much bigger that affects many more companies. NFC services like Google Wallet will make transactions more transparent, i.e. easier and more convenient. They also converge services into the phone and continue pushing the phone towards becoming your dominant computing platform. This is what I started this blog off with. It doesn’t matter whether Google Wallet in its present form becomes big or not. It’s a symptom of a larger movement.

No matter what business you are in, you need to evaluate your strategy with convergence and transparency in mind. How will your business play out when the phone is the dominant computing platform? Intel and AMD are reacting to this today. For once the interests of AMD and Intel are aligned: they need to bring the x86 architecture to tablets and then to mobile phones. Microsoft is also reacting as they worry about Windows being marginalized. Think how different this would have been had the iPhone and iPad been based on the Atom processor. For the other chip companies there is the increasing importance of LTE and the cloud. Flash memory will continue to be pushed to grow in density and decrease in price. The world is moving towards one gigabyte of storage in the phone. Remember reading about how overbuilt the global network is? Think again. OLED screens will finally become a mainstream technology, driven by the phone. Eventually they will grow to be the dominant technology in both laptops and TVs. This shift affects media too. The RIAA and MPAA continue their vain attempts at protecting intellectual property rather than embracing the technology trends and profiting from them. That’s an entire blog (or two or three) in and of itself.

Is your company preparing for the upcoming changes? More importantly, have you looked deep to see how convergence and transparency will change your business landscape?