Tag Archives: CPU

4K Video’s significant impact on info consumers, platform vendors and content providers

As we start to see the uptake of 4K video content, suppliers of CPUs, NICs (network interface cards), networks (LAN, WLAN, Wi-Fi) and storage technologies will all be struggling to “step up to the plate” in meeting the challenges of this disruptive video format.  IaaS platform providers will also face huge challenges configuring cloud components that can be rapidly provisioned for 4K content or video streaming.  Even the security industry will be affected, given the video surveillance infrastructure involved (see this Video Security Magazine article).


This is a Technologies Strategic Directions “Sleeping Inflection Point” for multiple industries, manufacturers, e-workers and information consumers.

Ultra-high definition (UHD) resolution is 3840×2160 pixels and is now used in displays and broadcast. This does not equal 4K (4096×2160 pixels), which is used in digital cinema. People tend to use the terms interchangeably, but there is a significant difference in the networking bandwidth required to service consumption of 4K.

We are all aware, from a display technology perspective, that TVs are now offering this content.  But what about the other network and computer infrastructure components?  When will they be able to handle the disruptive impact of 4K?
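A back-of-the-envelope calculation shows why the bandwidth impact is significant. The sketch below is illustrative only; it assumes uncompressed 8-bit-per-channel RGB at 60 frames per second, which no network delivers in practice, but it makes the UHD/4K distinction concrete:

```python
# Raw (uncompressed) bitrate comparison: UHD broadcast vs. DCI 4K cinema.
# Assumptions (illustrative only): 8 bits per channel, RGB, 60 frames/s.
BITS_PER_PIXEL = 24   # 3 channels x 8 bits
FPS = 60

def raw_gbps(width, height, bpp=BITS_PER_PIXEL, fps=FPS):
    """Raw bitrate in gigabits per second for an uncompressed stream."""
    return width * height * bpp * fps / 1e9

uhd = raw_gbps(3840, 2160)   # UHD resolution used in displays/broadcast
dci = raw_gbps(4096, 2160)   # DCI 4K resolution used in digital cinema

print(f"UHD (3840x2160):    {uhd:.2f} Gbps raw")
print(f"DCI 4K (4096x2160): {dci:.2f} Gbps raw")
print(f"DCI 4K carries {100 * (dci / uhd - 1):.1f}% more pixels per frame")
```

Even after heavy compression brings per-stream rates down by orders of magnitude, a platform provider multiplying that by thousands of concurrent streams faces exactly the NIC, LAN and storage pressures described above.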


New Device Types Are Driving Tech Companies’ Organizational Change

The dramatic growth in smartphones, tablets and vertical-market portable devices (e.g., medical instrumentation) is starting to drive major change at big tech companies.  If you watch the product offerings and new positioning of Google, Microsoft and Apple, you’ll see that significant investments are geared toward the mobile consumer and the mobile information worker. These products require new device technologies such as flexible silicon and thin flexible substrates for interconnect technology.


A good example of this is the lightning-fast reorganization of Intel after Brian Krzanich’s installation as CEO.  Under Otellini’s tenure, Intel missed a huge opportunity to become the chip supplier to Apple for the iPhone: the traditional, conservative “number crunching/data driven” advice given to Paul Otellini went against his gut, and Intel passed on the opportunity.  Their analysis misjudged the potential volume by a factor of 100 and overestimated the costs of manufacturing.  Basically, the conservative “group think” mindset there projected the iPhone as a losing business proposition.  See here.  The new CEO immediately reorganized the global enterprise to make it more agile and created a New Devices Group reporting directly to him.  See here.

Hopefully this will open Intel up to new markets and new types of Si architecture, along with new manufacturing processes. Hopefully the industry will also follow Intel’s lead and innovate even more in this hot technology domain. When you look at flexible silicon and thin-film technologies, the future is clear: the companies that embrace this technology, and benefit from the lessons learned by the old tech giants, will grow into the new tech giants.

Flexible Si
We will all use this soon

When Moore’s Law is not Enough

When we look at the history of the PC industry, we see that while Moore’s Law is fantastic, it is always outpaced by consumer demand. Market-expanding software solutions can be developed faster than hardware solutions, but they are frequently performance-constrained by the limits of running on general-purpose processors. Eventually, IHVs see a large enough market and have time to develop custom silicon to parallelize the process. This lag between when the problem is first noticed and when it is solved in silicon can be referred to as the “Wilson Gap,” a phrase coined by some Microsoft employees who worked with me and quoted my assessment, stated in a meeting regarding complex computational transforms, that “information consumer appetite/demand will always outpace CPU capability.”
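The dynamic behind that assessment can be sketched with a toy model. The growth rates below are hypothetical, chosen only to illustrate how a gap compounds when demand grows faster than general-purpose CPU capability:

```python
# Toy model of the "Wilson Gap": demand for compute grows faster than
# general-purpose CPU capability, so a shortfall compounds year over year
# until custom silicon closes it. Growth rates are hypothetical.
CPU_GROWTH = 1.41     # ~Moore's Law: 2x transistors every ~2 years
DEMAND_GROWTH = 1.80  # assumed growth in demand from new software categories

def years_until_gap(threshold=10.0):
    """Years until demand exceeds CPU capability by `threshold`x."""
    cpu, demand, years = 1.0, 1.0, 0
    while demand / cpu < threshold:
        cpu *= CPU_GROWTH
        demand *= DEMAND_GROWTH
        years += 1
    return years

print(years_until_gap())     # years until a 10x shortfall accumulates
print(years_until_gap(2.0))  # a 2x shortfall appears much sooner
```

The point is not the specific numbers but the shape: any sustained mismatch in exponents opens a gap that Moore’s Law alone cannot close.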

By doing a simple analysis of this “Wilson Gap” over a series of technologies we can see some very interesting patterns:

Wilson Gap analysis

*Note: This illustration is based on 2011 estimates

The vertical axis represents the number of years a particular technology was on the market in software-only form before it was introduced in silicon as an ASIC (Application-Specific Integrated Circuit). Based on this data, I would like to postulate that companies like Microsoft and Google have a direct bearing on these figures, and that in many cases they can significantly reduce the Wilson Gap. But first, let’s review the situation a little further.

How the SW Industry Fights the Wilson Gap

While the flexibility of the general-purpose CPU offers imaginative engineers the ultimate design surface, it likewise has an inherent limitation: code must be reduced to a lowest common denominator, namely the CPU instruction set. Time and again, this limitation has opened a Wilson Gap between what consumers want and what the PC platform is able to inherently deliver.

For Many of Today’s Needs Moore’s Law is too Slow

As the previous graph illustrates, the Wilson Gap was a limiting factor in the potential market for specific technologies when the CPU was not fast enough for consumer demand for floating-point operations. Likewise, at various times throughout PC history, the CPU has not kept up with demand for:

  • Digital Signal Processing (DSP)
  • 3D Graphics
  • SSL Processing (encompassing 3DES, RSA, AES)
  • MPEGx Encoding/Decoding
  • Windows Media Encoding/Decoding
  • TCP/IP offloading
  • XML Parsing and Canonicalization

ASICs help reduce the Wilson Gap

When Moore’s Law is too slow, we traditionally rely on ASICs to fill the Wilson Gap. In all of the examples above (math coprocessor, DSP, 3D, 3DES, RSA, MPEG, etc.) we now have fairly low-cost ASICs that solve the performance issue, though total time to solution and time to money remain far too long for current industry economic conditions. These ASIC processors will typically accelerate a task, off-load a task or perform some combination of the two. For the remainder of this paper we’ll use the term “accelerate” to include acceleration that encompasses CPU off-loading.
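The accelerate-or-fall-back pattern described here can be sketched as follows. The `Accelerator` interface is hypothetical, a stand-in for a real ASIC/DSP driver; CRC32 is used only as a convenient example of an offloadable task:

```python
# Hypothetical sketch of the accelerate/offload pattern: dispatch a task
# to dedicated silicon when it is present, otherwise fall back to the
# lowest-common-denominator CPU code path.
from typing import Callable, Optional

class Accelerator:
    """Stand-in for an ASIC/DSP driver interface (illustrative only)."""
    def __init__(self, name: str, fn: Callable[[bytes], bytes]):
        self.name, self.fn = name, fn

def software_crc32(data: bytes) -> bytes:
    # Software path: runs on any general-purpose CPU via the instruction set.
    import zlib
    return zlib.crc32(data).to_bytes(4, "big")

def dispatch(data: bytes, accel: Optional[Accelerator] = None) -> bytes:
    """Offload to the accelerator if one is installed, else run in software."""
    if accel is not None:
        return accel.fn(data)    # task is off-loaded; the CPU is freed
    return software_crc32(data)  # fallback path; the Wilson Gap remains

# With no accelerator present, the software path is used:
result = dispatch(b"hello world")
```

The same shape appears throughout the stack: the media pipeline, the network stack and the crypto layer each probe for hardware at load time and keep a software path as the guaranteed baseline.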

The Downside to ASIC Solutions

Unfortunately, ASICs are inherently slow to market and a very risky business proposition. For example, the typical ASIC takes 8 to 12 months to design, engineer and manufacture. Thus their target technologies must be under extremely high market demand before companies will make the bet and begin the technology development and manufacturing process. As a result, ASICs will always be well behind the curve of information-consumer requirements served by cutting-edge software.

Another difficulty faced in this market is that ASIC or Silicon Gate development is very complex, requiring knowledge of VHDL or Verilog. The efficient engineering of silicon gate-oriented solutions requires precision in defining the problem space and architecting the hardware solution. Both of these precise processes take a long time.

FPGAs further reduce the Wilson Gap

A newer approach to reducing the Wilson Gap that is gaining popularity is the use of Field Programmable Gate Arrays (FPGAs). FPGAs provide an interim solution between ASICs and software running on a general-purpose CPU. They allow developers to reconfigure the gates on a chip and achieve performance benefits on par with ASICs, while at the same time allowing the chip to be reprogrammed with updated code or a completely different algorithm. Modern development tools are also coming online that reduce the complexity of programming these chips by adding parallel extensions to the C language and then compiling C code directly to gate patterns. One of the most popular examples of this is Handel-C (out of Cambridge).

The Downside to FPGA Solutions

Typically, FPGAs run at 50% to 70% of the speed of an identical ASIC solution. However, FPGAs are better suited to parallelizing algorithms, are reconfigurable so as to receive updates, and benefit from a shorter development cycle (http://www.xilinx.com/products/virtex/asic/methodology.htm). These factors combine to give an FPGA-based solution a longer lifespan than an ASIC solution.
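The tradeoff between the FPGA’s earlier ship date and its lower throughput can be made concrete with a rough break-even sketch. The FPGA speed factor comes from the 50–70% range above and the ASIC lead time from the 8–12 month range earlier in this post; the FPGA lead time is an assumption for illustration:

```python
# Rough break-even sketch: does an FPGA's earlier ship date outweigh its
# lower throughput versus an ASIC? Numbers are assumptions: FPGA speed
# from the 50-70% range above, ASIC lead time from the 8-12 month range,
# FPGA lead time assumed for illustration.
FPGA_RELATIVE_SPEED = 0.6   # FPGA throughput as a fraction of the ASIC's
FPGA_LEAD_MONTHS = 3        # assumed FPGA development time
ASIC_LEAD_MONTHS = 10       # mid-point of the 8-12 month ASIC cycle

def units_processed(horizon_months, lead_months, relative_speed=1.0):
    """Total work shipped by `horizon_months`, normalized to ASIC speed = 1."""
    return max(0, horizon_months - lead_months) * relative_speed

for horizon in (12, 18, 24, 36):
    fpga = units_processed(horizon, FPGA_LEAD_MONTHS, FPGA_RELATIVE_SPEED)
    asic = units_processed(horizon, ASIC_LEAD_MONTHS)
    leader = "FPGA" if fpga > asic else "ASIC"
    print(f"{horizon:2d} months: FPGA {fpga:5.1f} vs ASIC {asic:5.1f} -> {leader}")
```

Under these assumed numbers the FPGA leads for roughly the first 20 months, and every field reconfiguration extends that lead, which is exactly why FPGA solutions outlive ASICs in fast-moving markets.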

A Repeating Pattern

Looking at the market for hardware accelerators over the past 20 years we see a repeating pattern of:

  1. First implemented on the general purpose CPU
  2. Migrated to ASIC/DSP once the market is proven

Next the technology typically takes one of two paths:

  1. The ASIC takes on a life of its own and continues to flourish (such as 3D graphics) outside of the CPU (or embedded back down on the standard motherboard)
  2. The ASIC becomes obsolete as Moore’s Law brings the general purpose CPU up to par with the accelerator by including the newly required instructions.

Now let’s examine two well known examples in the Windows space where the Wilson Gap has been clearly identified and hardware vendors are in the development cycle of building ASIC solutions to accelerate our bottlenecks.

Current Wilson Gaps

Our first example is Windows Media 9 decoding: ASIC hardware is on its way thanks to companies such as ATI, NVIDIA and others. This will allow the playback of HD-resolution content, such as the new Terminator 2 WM9 DVD, on lower-performance systems. Another example is the TCP Offload Engine (TOE), which has recently arrived on the scene. Due to the extensibility of both the Windows media and networking stacks, both of these technologies are fairly straightforward to implement.

Upcoming Wilson Gaps – Our Challenge

However, moving forward, the industry faces other technologies which don’t have extensibility points for offloading or acceleration. This lack of extensibility has led to duplication of effort across various product teams: not duplication in the competitive sense (which is usually good), but a symbiotic duplication of effort that increases the cost of maintenance and security.


What’s Between Now and 2035?

My Conclusions on Si Architecture Trends and Their Ecosystem Impact

Today’s Si companies must track the key trends in Si technology development, assembly/test, nanotechnology, cooling, emerging research, virtualization, acceleration and complex Si architectures. They must drive their product teams, in close collaboration with other Si vendors, to keep the enterprise in a thought-leadership position contemporary with both the silicon industry and consumer demands.

This blog is intended to document key technology trends and issues I feel will have a major impact between now and 2035. The following areas will be covered:

Silicon technology, architecture  processes and innovation

  • Lithography Evolution enables “Moore than Moore”
  • Size, Nano-techniques & Subatomic wire
  • Cooling via refrigeration or wind
  • Cores, components and the Si complex
  • Thinner materials, e.g., nanotubes & self-assembly
  • Faster transistors, e.g., ultrathin graphene
  • Optical Computing, Molecular Computing
  • Quantum Computing, Biological Computing
  • Integration level: components/chip (Moore’s Law)
  • Cost: cost per function
  • Speed: microprocessor throughput
  • Power: laptop or cell battery life
  • Compactness: small and light-weight products
  • Functionality: nonvolatile memory, imager

Software As a Service
Cloud Computing SW & HW trends to watch
System Architecture

  • System Drivers
  • Design
  • Mixed-signal Tech in Wireless Communications
  • Emerging Research Devices
  • Front End Processes
  • Lithography
  • Interconnect
  • Factory Integration, Assembly & Test.

Enterprise IT Architecture
Applications Infrastructure as it relates to all of the above.
