As we start to see the uptake of 4K video content, suppliers of CPUs, NICs (Network Interface Cards), networks (LAN, WLAN, Wi-Fi) and storage technologies will all be struggling to “step up to the plate” in meeting the challenges of this disruptive video format. IaaS platform providers will also face huge challenges configuring cloud components that can be rapidly provisioned for 4K content or video streaming. Even the security industry will be affected via its video surveillance infrastructure (see this Video Security Magazine article).
This is a Technologies Strategic Directions “Sleeping Inflection Point” for multiple industries, manufacturers, eworkers and information consumers.
Ultra-high definition (UHD) resolution is 3840×2160 pixels and is now used in displays and broadcast. This does not equal 4K (4096×2160 pixels), which is used in digital cinema. People tend to use the terms interchangeably, but there is a significant difference in the networking bandwidth required to service consumption of 4K.
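To make the bandwidth difference concrete, here is a quick back-of-envelope sketch of the raw (uncompressed) bit rate for UHD versus DCI 4K. The frame rate and bit depth are illustrative assumptions only (30 fps, 8 bits per channel); real delivery uses compression, but the raw ratio shows the relative load on the network.

```python
# Back-of-envelope raw (uncompressed) bandwidth for UHD vs DCI 4K.
# Assumes 30 fps and 24 bits per pixel (8 bits x 3 channels) -- illustrative only.

def raw_bandwidth_gbps(width, height, fps=30, bits_per_pixel=24):
    """Raw video bandwidth in gigabits per second."""
    return width * height * bits_per_pixel * fps / 1e9

uhd = raw_bandwidth_gbps(3840, 2160)   # consumer UHD
dci = raw_bandwidth_gbps(4096, 2160)   # digital-cinema 4K

print(f"UHD:    {uhd:.2f} Gbps raw")
print(f"DCI 4K: {dci:.2f} Gbps raw")
print(f"DCI 4K needs {dci / uhd - 1:.1%} more bandwidth than UHD")
```

Even before compression enters the picture, the wider DCI frame demands roughly 6.7% more bits per second than UHD at the same frame rate, which is why conflating the two formats skews capacity planning.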
We all are aware from a display technology perspective that TVs are now offering this content. However, how about other network and computer infrastructure components? When will they be able to handle the disruptive impact of 4K?
Over the course of four days, 2-5 March 2015, Mobile World Capital Barcelona will host the world’s greatest mobile event: Mobile World Congress. See this website for more info: http://www.mobileworldcongress.com/
The mobile communications revolution is driving the world’s major technology breakthroughs. From wearable devices to connected cars and homes, mobile technology is at the heart of worldwide innovation. As an industry, we are connecting billions of people to the transformative power of the Internet and mobilising every device we use in our daily lives.
In short, the whole world is on The Edge of Innovation, and the possibilities are endless. The 2015 GSMA Mobile World Congress will convene industry leaders, visionaries and innovators to explore the trends that will shape mobile in the years ahead.
About the Event
Here are the components that make up this industry-leading event:
A world-class thought-leadership Conference featuring visionary keynotes and panel discussions
A cutting-edge product and technology Exhibition featuring more than 1,900 exhibitors
The world’s best venue for seeking industry opportunities, making deals, and networking
App Planet, the Centre of the Mobile Apps Universe, where the mobile app community gathers to learn, network and engage with innovators
I’ve been digesting and expanding on an interesting white paper authored by the Microsoft Azure Incubation team titled: Building the Internet of Things – Early learnings from architecting solutions focused on predictive maintenance. I agree with the premise that ubiquity in connection technology will be the key enabler and that predictive maintenance will probably be required to instantiate a true global ubiquitous connection state. Recently Microsoft has been changing its terminology from Internet of Things (IoT) to Internet of Everything (IoE); here I use them interchangeably. A key technical enabler of the IoT is ubiquitous connectivity. A week or so ago I blogged about a new technology called Active Steering(TM), which should be the winner in patented connectivity hardware/software/firmware for antenna products.
Just imagine that the antenna on your device was constantly sampling the wireless signals around your location and finding the strongest source and directing the focus of the antenna to that source. That is what an Active Steering antenna does on your phone, tablet or PC. By using this technique the system is also performing predictive maintenance on the connectivity configuration for your specific device and location. Let’s first look at the Open Systems Interconnection (OSI) model. Even though the Internet model uses a simplified abstraction, the models in the previous figure and the associated well-known logical protocols are comparable. Application-layer protocols are not concerned with the lower-level layers in the stack other than being aware of the key attributes of those layers, such as IP addresses and ports. The right side of the figure shows the logical protocol breakdown transposed over the OSI model and the TCP/IP model.
Special-purpose devices differ not only in the depth of their relationship with back-end services, but in the interaction patterns of these services when compared to information-centric devices because of their role as peripherals. They are not the origin of command-and-control gestures; instead, they typically contribute information to decisions, and receive commands as a result of decisions. The decision-maker does not interface with them locally, and the device acts as an immediate proxy; the decision-maker is remotely connected and might be a machine. We usually classify interaction patterns for special-purpose devices into the four categories indicated in the following figure.
All of these models need uninterrupted connectivity to enable the ultimate user experience that Windows 10 could offer with the addition of Active Steering Technologies at the Platform level.
Here in San Diego we have a number of new and innovative technology companies. One of them I’m proud to be connected to through Rick Johnson (their CFO), who was the CFO of Tarari and a fellow Intel alumnus. The company is Ethertronics, which makes chipsets that implement various types of antenna functionality for the mobile market. Their latest introduction at this year’s CES is Active Steering.
The industry is preparing for a 1000x increase in data traffic while carriers are running out of capacity on their networks. The solution to this daunting challenge isn’t as simple as just buying more spectrum or adding more infrastructure in the form of cellular towers, small cells and Distributed Antenna Systems (DAS) equipment. The wireless devices themselves are key to increasing capacity through spectral efficiency. Active Steering technology is crucial to improve device efficiency and performance.
Ethertronics’ Active Steering technology provides:
Significantly faster throughput – higher data rates
Reduced unwanted interference
Seamless handoffs between towers
A better connected experience for users
Increased spectral efficiency – more capacity for carriers’ networks
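The core idea, an antenna constantly sampling the wireless signals around it and steering toward the strongest source, can be sketched in a few lines. Ethertronics implements this in chipset hardware and firmware; the pattern names and RSSI values below are hypothetical, purely to illustrate the selection step.

```python
# Illustrative sketch of the active-steering idea: periodically sample received
# signal strength (RSSI) for each selectable antenna radiation pattern and
# switch to the strongest. The real implementation lives in silicon/firmware;
# all names and numbers here are hypothetical.

def pick_best_pattern(rssi_by_pattern):
    """Return the pattern id with the strongest (least negative) RSSI in dBm."""
    return max(rssi_by_pattern, key=rssi_by_pattern.get)

# Hypothetical RSSI samples (dBm) for four selectable radiation patterns.
samples = {
    "pattern_0": -71.0,
    "pattern_1": -64.5,   # strongest source at this instant
    "pattern_2": -80.2,
    "pattern_3": -67.3,
}

best = pick_best_pattern(samples)
print(f"Steering to {best} at {samples[best]} dBm")
```

Repeating this sample-and-select loop continuously is what lets the device adapt as the user moves or the RF environment changes, which is also why it resembles predictive maintenance of the connectivity configuration.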
The dramatic growth in smartphone, tablet and vertical-market portable devices (e.g., medical instrumentation) is starting to drive major change at big tech companies. If you watch the product offerings and new positioning of Google, Microsoft, and Apple, you’ll see that significant investments are geared toward the mobile consumer and mobile information worker. These products require new device technologies such as flexible silicon and thin flexible substrates for interconnect technology.
A good example of this is the lightning-fast reorganization of Intel after Brian Krzanich’s installation as CEO. Under Otellini’s tenure Intel missed a huge opportunity to become the chip supplier to Apple for iPhones: although the traditional conservative “number crunching/data driven” advice went against Paul Otellini’s gut, Intel passed on the opportunity. That analysis misjudged the potential volume by a factor of 100 and overestimated the costs of manufacturing; basically, the conservative “group think” mindset there projected the iPhone as a losing business proposition. See here. The new CEO immediately reorganized the global enterprise to make it more agile and created a New Devices Group reporting directly to him. See here.
Hopefully this will open Intel up to address new markets and new types of Si architecture along with new manufacturing processes, and hopefully the industry will follow Intel’s lead and innovate even more in this hot technology domain. When you look at flexible silicon and thin-film technologies, the future is clear: the new companies that grow into tech giants will be those that embrace this technology and benefit from lessons learned from the old tech giants.
The IaaS and PaaS cloud models allow architects to decouple an application or enterprise system into its lowest functional components, design for failure, and treat these pieces as “independent black boxes” that are composed into an application. This allows for provisioning elasticity and resiliency of individual components and their states in the inevitable event of hardware or software failure.
One of the least understood impacts of this approach is that the message queues used by components can become the most important elements in assuring availability, scalability and ultimately reliability. In essence, the messaging infrastructure components become the most critical parts of an application infrastructure designed to exploit elasticity. If you envision these enterprise apps as complex organisms, then the message queues and their reliability become mission-critical organs of the living, agile enterprise architecture. Components such as controller apps and databases should be isolated, allowing buffering of requests along with replies, making the network of components more durable and state-independent and facilitating failover and scalability.
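The buffering idea above can be sketched with a minimal producer/worker pair joined only by queues. This uses Python’s in-process standard-library queues purely for illustration; a real elastic system would use a durable broker (e.g., a cloud queue service) so messages survive component failures.

```python
# Minimal sketch of queue-based decoupling: the caller and the worker never
# talk directly; requests and replies are buffered in queues, so either side
# can be slow, restarted, or scaled out independently.
import queue
import threading

requests = queue.Queue()   # buffers requests if the worker is down or slow
replies = queue.Queue()    # buffers replies back to the caller

def worker():
    while True:
        msg = requests.get()
        if msg is None:            # sentinel: shut down cleanly
            break
        replies.put(f"processed:{msg}")

t = threading.Thread(target=worker)
t.start()

for i in range(3):
    requests.put(f"req-{i}")       # caller only ever touches the queue

requests.put(None)                 # signal shutdown
t.join()

results = []
while not replies.empty():
    results.append(replies.get())
print(results)
```

Because state flows through the queues rather than through direct calls, the worker here could be replaced mid-run (or multiplied for scale) without the caller changing, which is exactly the failover and elasticity property the paragraph describes.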
As we look at today’s complex product, business and end-user requirements, some key ideas must be addressed to achieve profit-margin goals. Almost all electronic products today utilize software, hardware and multiple suppliers/vendors to complete product functionality. The chart below is meant to trigger thinking about key items that must be included in today’s electronic product engineering process.
When we look at the history of the PC industry, we see that while Moore’s Law is fantastic, it is always outpaced by consumer demand. Market-expanding software solutions can be developed faster than hardware solutions but are frequently performance-constrained by the limits of running on general-purpose processors. Eventually IHVs see a large enough market and have time to develop custom silicon to parallelize the process. This lag between when the problem is first noticed and when it’s solved in silicon can be referred to as the “Wilson Gap,” a phrase coined by some Microsoft employees who worked with me and quoted my assessment, stated in a meeting regarding complex computational transforms, that “Information consumer appetite/demand will always outpace CPU capability.”
By doing a simple analysis of this “Wilson Gap” over a series of technologies we can see some very interesting patterns:
*Note: This illustration is based on 2011 estimates
The vertical axis represents the number of years a particular technology was on the market in software-only form before it was introduced in silicon as an ASIC (Application-Specific Integrated Circuit). Based on this data I would like to postulate that companies like Microsoft and Google have a direct bearing on these figures, and that in many cases they can significantly reduce the Wilson Gap. But first, let’s review the situation a little further.
How the SW Industry Fights the Wilson Gap
While the flexibility of the general-purpose CPU offers imaginative engineers the ultimate design surface, it likewise has the inherent limitation that code must be reduced to a lowest common denominator: the CPU instruction set. Time and again, this limitation has caused a Wilson Gap between what consumers want and what the PC platform is able to inherently deliver.
For Many of Today’s Needs Moore’s Law is too Slow
As the previous graph illustrates, the Wilson Gap was a limiting factor in the potential market for specific technologies when the CPU was not fast enough to meet consumer demand for floating-point operations. Likewise, at various times throughout PC history, the CPU has not kept up with demand for:
Digital Signal Processing (DSP)
SSL Processing (encompassing 3DES, RSA, AES)
Windows Media Encoding/Decoding
XML Parsing and Canonicalization
ASICs help reduce the Wilson Gap
When Moore’s Law is too slow we traditionally rely on ASICs to fill the Wilson Gap. In all of the examples above (math coprocessor, DSP, 3D, 3DES, RSA, MPEG, etc.) we now have fairly low-cost ASICs that can solve the performance issue, although total time to solution and time to money are far too long for current industry economic conditions. These ASIC processors will typically accelerate a task, off-load a task or perform some combination of the two, but for the remainder of this paper we’ll use the term “accelerate” to include acceleration that encompasses CPU off-loading.
The Downside to ASIC Solutions
Unfortunately ASICs are inherently slow to market and are a very risky business proposition. For example, the typical ASIC takes 8 to 12 months to design, engineer and manufacture. Thus their target technologies must be under extremely high market demand before companies will make the bet and begin the technology development and manufacturing process. As a result, ASICs will always be well behind the curve of information consumer requirements served by cutting edge software.
Another difficulty faced in this market is that ASIC or Silicon Gate development is very complex, requiring knowledge of VHDL or Verilog. The efficient engineering of silicon gate-oriented solutions requires precision in defining the problem space and architecting the hardware solution. Both of these precise processes take a long time.
FPGAs further reduce the Wilson Gap
A newer approach to reducing the Wilson Gap that is gaining popularity is the use of Field Programmable Gate Arrays (or FPGAs). FPGAs provide an interim solution between ASICs and software running on a general purpose CPU. They allow developers to realign the silicon gates on a chip and achieve performance benefits on par with ASICs, while at the same time allowing the chip to be reconfigured with updated code or a completely different algorithm. Modern development tools are also coming on line that reduce the complexity of programming these chips by adding parallel extensions to the C language, and then compiling C code directly to Gate patterns. One of the most popular examples of this is Handel-C (out of Cambridge).
The Downside to FPGA Solutions
Typically FPGAs run at 50% to 70% of the speed of an identical ASIC solution. However, FPGAs are better geared to parallelize algorithms, are configurable so as to receive updates, and leverage a shorter development cycle (http://www.xilinx.com/products/virtex/asic/methodology.htm). These factors combine to extend the lifespan of a given FPGA-based solution further than an ASIC solution.
A Repeating Pattern
Looking at the market for hardware accelerators over the past 20 years we see a repeating pattern of:
First implemented on the general purpose CPU
Migrated to ASIC/DSP once the market is proven
Next the technology typically takes one of two paths:
The ASIC takes on a life of its own and continues to flourish (such as 3D graphics) outside of the CPU (or embedded back down on the standard motherboard)
The ASIC becomes obsolete as Moore’s Law brings the general-purpose CPU up to par with the accelerator by including the new instructions required.
Now let’s examine two well known examples in the Windows space where the Wilson Gap has been clearly identified and hardware vendors are in the development cycle of building ASIC solutions to accelerate our bottlenecks.
Current Wilson Gaps
Our first example is Windows Media 9 decoding; ASIC hardware is on its way thanks to companies such as ATI, NVIDIA and others. This will allow the playback of HD-resolution content, such as the new Terminator 2 WM9 DVD, on lower-performance systems. Another example is TCP Offload Engines (TOE), which have recently arrived on the scene. Due to the extensibility of both the Windows media and networking stacks, both of these technologies are fairly straightforward to implement.
Upcoming Wilson Gaps – Our Challenge
However, moving forward the industry faces other technologies which don’t have extensibility points for offloading or acceleration. This lack of extensibility has led to duplication of effort across various product teams: not duplication in a competitive sense (which is usually good), but a symbiotic duplication of effort that increases the cost of maintenance and security.
My Conclusion on Si Architecture Trends and their ecosystem impact
Today’s Si companies must track the key trends in Si technology development, assembly/test, nanotechnology, cooling, emerging research, virtualization, acceleration and complex Si architectures, and drive their product teams in close collaboration with other Si vendors, keeping the enterprise in a thought-leadership position contemporary with both the silicon industry and consumer demands.
This blog is intended to document key technology trends and issues I feel will have a major impact between now and 2035. The following areas will be covered: