The dramatic growth in smartphones, tablets, and vertical-market portable devices (e.g., medical instrumentation) is starting to drive major change at the big tech companies. If you watch the product offerings and new positioning of Google, Microsoft, and Apple, you’ll see that significant investments are geared toward the mobile consumer and the mobile information worker. These products require new device technologies such as flexible silicon and thin, flexible substrates for interconnect technology.
A good example of this is the lightning-fast reorganization of Intel after Brian Krzanich’s installation as CEO. Under Otellini’s tenure, Intel missed a huge opportunity to become the chip supplier to Apple for the iPhone. Even though the traditionally conservative “number crunching/data driven” advice given to Paul Otellini went against his gut, Intel passed on the opportunity. Their analysis misjudged the potential volume by a factor of 100 and overestimated the costs of manufacturing. Basically, the conservative “group think” mindset there projected the iPhone as a losing business proposition. The new CEO immediately reorganized the global enterprise to make it more agile and created a New Devices Group reporting directly to him.
Hopefully this will open Intel up to addressing new markets and new types of silicon architecture, along with new manufacturing processes. Hopefully the industry will also follow Intel’s lead and innovate even more in this hot technology domain. When you look at flexible silicon and thin-film technologies, the future is clear: the new companies that embrace this technology, and that benefit from the lessons learned by the old tech giants, will grow into the next tech giants.
The IaaS and PaaS cloud models allow architects to decouple an application or enterprise system into its lowest-level functional components, design for failure, and treat those pieces as “independent black boxes” that compose into an application. This allows for provisioning elasticity and for the resiliency of individual components and their state in the inevitable event of hardware or software failure.
One of the least understood impacts of this approach is that the message queues used by components can become the most important elements in assuring availability, scalability, and ultimate reliability. In essence, the messaging infrastructure becomes the most critical part of an application’s infrastructure when that application is designed to exploit elasticity. If you envision these enterprise apps as complex organisms, then the message queues and their reliability become mission-critical organs of the living, agile enterprise architecture. Components such as controller apps and databases should be isolated, allowing buffering of requests and replies; this makes the network of components more durable and state-independent, facilitating failover and scalability.
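A minimal sketch of this buffering idea, using Python's in-process `queue` module as a stand-in for a real message broker (the worker logic and message names are illustrative assumptions, not a prescribed design):

```python
import queue
import threading

# Two queues decouple producer and consumer: the producer never needs to
# know whether the worker is up, busy, or restarting at send time.
requests = queue.Queue()   # buffers inbound requests
replies = queue.Queue()    # buffers outbound replies symmetrically

def worker():
    """Consume buffered requests; a stand-in for any isolated component."""
    while True:
        msg = requests.get()
        if msg is None:            # sentinel: shut down cleanly
            break
        replies.put(msg.upper())   # placeholder for real processing
        requests.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

for req in ["alpha", "beta"]:
    requests.put(req)              # producer is decoupled from consumer state
requests.join()                    # wait until all buffered requests are handled
requests.put(None)                 # signal shutdown

results = [replies.get() for _ in range(2)]
print(results)
```

In a production system the two `Queue` objects would be durable broker queues (e.g., hosted messaging), so buffered requests survive a component crash and a replacement worker can drain them on restart.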
The influx of companies trying to exploit “Big Data” as a new revenue source has provided a number of workforce challenges for senior managers. Do they hire very smart math folk to devise new algorithms and create a “secret sauce” for their products? Do they develop or acquire superior hardware that leverages new silicon technology to better process big data? Do they form teams with practical business experience to ferret out which real problems exist in the marketplace and which approach to analytics will be truly appreciated by customers’ end users?
Well, the answer is … a little of each! The most important thing many companies miss today is that they focus on the technology and the technologists in their hiring decisions, but not on business-logic experience. There is great value in having teams of technology folk embedded with thought leadership that comes from experience. Bright, eager, smart people with minimal experience know theory and math but don’t know human behavior in business. They also don’t understand the technology-assimilation hurdles that form huge barriers to rapid adoption and market-share growth. The targeted customer base will often need help understanding:
How much data do we have?
What is actionable information contained in the big data fog?
How much information do we need to make decisions?
What changes in data are significant and require action?
What is a practical “on ramp” to use big data technology?
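One of the questions above, “what changes in data are significant and require action?”, has a simple quantitative starting point. As an illustrative sketch (the threshold `k` and window size are hypothetical choices, not a recommendation from the article), flag a point when it deviates from the recent running mean by more than `k` standard deviations:

```python
from statistics import mean, stdev

def significant_changes(series, k=2.0, window=5):
    """Return indices of points that deviate from the trailing window
    by more than k standard deviations -- a naive significance test."""
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(series[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged

# A spike at index 6 stands out against otherwise stable readings.
readings = [10, 11, 10, 12, 11, 10, 35, 11, 10]
print(significant_changes(readings))
```

Real deployments would replace this with domain-appropriate anomaly detection, but even a toy rule like this helps a customer see what “actionable” can mean against their own data.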
The bottom line: an integrated team of smart technologists, stewarded through development by experienced thought leadership, will produce the “BIG THEORY” required to make big data solutions palatable and easily digested by the human organism we call an enterprise. In reality, it is meaningful mobile visualization that transforms BIG DATA into actionable information.
As enterprises come to grips with cloud computing demands (both internal and external), IT groups will soon realize that the hybrid model is the best fit for the new enterprise IT organization. This will also force a closer alignment with the various business units and provoke a rethink of the costing models for IT: can IT really stay a cost center given the inevitable variable demand curve of cloud services? Enterprise IT shops will consider various vendors (e.g., Azure, HP, VMware, Amazon, and others) in light of the matrix created by matching customers’ service-type needs to the flexibility of leveraging a vendor’s cloud service offerings to suit the enterprise’s complex business needs. Ease of entrance and exit will be the driving force behind vendor selection: not just cost, but the ease of achieving true operational excellence.
Finance, corporate strategy, business units, and IT will collaborate to determine which “flavor” of cloud services is needed. For example, the SaaS, IaaS, or PaaS models may all be needed in view of the business objectives. The decision about which kinds of service offerings to implement will drive IT’s customers to do a functional decomposition of existing applications and distill which services are used today. This will lead to an “applicability analysis” of which type of cloud implementation makes good business sense. Some may choose Cloud Platform as a Service, Cloud Infrastructure as a Service, or the Cloud Software as a Service model. These may also include convenient “off ramp and on ramp” strategies to allow customers to switch as circumstances dictate. An example of the choices is illustrated below:
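One way to start such an applicability analysis is to encode each application's needs and map them to a service model. The rules and application names below are hypothetical illustrations of the decision logic, not a prescribed taxonomy:

```python
def recommend_model(app_needs):
    """Naive applicability rule: commodity function -> SaaS;
    custom code on a managed runtime -> PaaS; otherwise IaaS."""
    if app_needs.get("off_the_shelf"):
        return "SaaS"   # consume it as a finished service
    if app_needs.get("custom_code") and app_needs.get("managed_runtime"):
        return "PaaS"   # custom app, no desire to run servers
    return "IaaS"       # needs full control over OS and stack

# Hypothetical application portfolio after functional decomposition.
portfolio = {
    "email": {"off_the_shelf": True},
    "order_engine": {"custom_code": True, "managed_runtime": True},
    "legacy_erp": {"custom_code": True, "managed_runtime": False},
}

for name, needs in portfolio.items():
    print(name, "->", recommend_model(needs))
```

A real analysis would add axes for compliance, data gravity, and the off-ramp/on-ramp cost the text mentions, but even a coarse matrix like this makes the Finance/IT conversation concrete.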
As we look at today’s complex product, business, and end-user requirements, some key ideas must be addressed to achieve profit-margin goals. Almost all electronic products today utilize software, hardware, and multiple suppliers/vendors to complete product functionality. The chart below is meant to trigger thinking about key items that must be included in today’s electronic product engineering process.
To be successful going forward in voice and data communications, companies must be able to integrate the cost benefits of VoIP wireline carrier transmission with cellular/mobile end-user voice and data product suites; e.g., AT&T, COX, et al. are bundling offerings to increase conglomerate market share. Wireless and wireline service offerings: previously, large corporations spent billions of dollars expanding their PSTN networks. As mobile and cellular technologies became available (and dominant), additional investments created the second stage of telecommunications. Today, trends foretell the final chapter for these loosely connected technologies. With the introduction of mobile data communications such as 802.11 (Wi-Fi) and 802.16 (WiMAX), Next Generation Networks (NGNs) will begin to support both data and voice communications to a wireless end user, while taking advantage of the cost-effective VoIP network backbone. Smartphone carriers have already successfully integrated a VoIP/wireless package for this product-type powerhouse.
These companies must also be able to saturate the international origination retail marketplace with their combined product suites. As we have seen in the US domestic marketplace, companies are offering bundled services to customers, including:
Enhanced voice products that are easily integrated into a VoIP switching platform, such as voicemail, conference calling, international toll-free, and the personal secretary (follow-me number)
Integration of voice and data products, such as 802.11 wireless hot-spot services, local phone services, and cellular (CDMA, GSM) subscriptions.
Initiatives with strategic partners
A key driver for riding this momentum is the ability to penetrate the market with affordable voice services and then maximize product suites through customer up-sell, using the Internet for advertising, provisioning, and selling. This eliminates the large personnel infrastructures that have, in the past, proved too costly as the gap in the PSTN contribution model shrinks. In addition, these marketing plans can be handled virtually from one corporate operating entity for each geographic/ethnic market segment.
Finally, success will be contingent on companies’ familiarity with the sensitive balance between revenues and the associated Cost Minutes of Use (CMOUs), while building and maintaining a network with low ongoing capital expense.
Revenue vs. cost minutes – Although operating margins will continue to decrease per MOU, these declines will be offset by strong worldwide demand for wireless data and voice services, both standard and enhanced. The addition of wireline and wireless customers in less developed countries will capture a huge market segment and produce high revenue figures until the international accounting recalibration of the industry in these “start-up” markets.
Building a low-cost enhanced-service network – While VoIP networks have rendered old business models obsolete, they are expected to drive down the cost structure of providing service. By building an MPLS access network for VoIP, telecom engineers, operators, and technicians can now be located virtually, significantly reducing operating costs and improving service quality. Large domestic providers, however, have billions of dollars invested in PSTN networks. Understanding that cost reductions will be a necessity to stay competitive, migrating away from PSTN networks to VoIP presents a large dilemma: instigating and executing an expense plan that results in reduced revenues. RBOC and telephone carrier leaders seem to be slightly behind the curve of independent players who are building their model on the future instead of the past. How these large conglomerates handle this transition and these financial challenges will foretell their viability and future. Simply put, tomorrow’s phone company will not be your parents’ phone company.
I have a controversial view that new SaaS adoption rates will be better served by focusing on user benefits rather than “tech-selling” buzzwords. A practical example: I believe the growth in numbers of the “boomer” generation is going to drive more customers to the SaaS/IaaS platform providers. E.g., MyGait, below, offers not only a computer system tuned to older users’ needs (magnification, large keys, etc.) but also a service program and financing that essentially sign the buyer up to a SaaS model by selling the features and benefits they need.
A combination of color coding and Input Method Editor (IME) options is perfectly suited for the older user in the US and international community.
A good working example of this is the lightning-fast adoption rate in mobile telco of Windows Phone and Android applications.
When we look at the history of the PC industry, we see that while Moore’s Law is fantastic, it is always outpaced by consumer demand. Market-expanding software solutions can be developed faster than hardware solutions, but they are frequently performance-constrained by the limits of running on general-purpose processors. Eventually, IHVs see a large enough market and have time to develop custom silicon to parallelize the process. The lag time between when the problem is first noticed and when it is solved in silicon can be referred to as the “Wilson Gap,” a phrase coined by some Microsoft employees who worked with me and quoted my assessment, “Information consumer appetite/demand will always outpace CPU capability,” which I stated in a meeting regarding complex computational transforms.
By doing a simple analysis of this “Wilson Gap” over a series of technologies we can see some very interesting patterns:
*Note: This illustration is based on 2011 estimates
The vertical axis represents the number of years a particular technology was on the market in software-only form before it was introduced in silicon as an ASIC (Application-Specific Integrated Circuit). Based on this data, I would like to postulate that companies like Microsoft and Google have a direct bearing on these figures, and that in many cases they can significantly reduce the Wilson Gap. But first, let’s review the situation a little further.
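The metric behind that axis is simple to compute. The sketch below uses entirely hypothetical debut years (stand-ins, not the figures from the chart) just to show the calculation: the Wilson Gap for a technology is the lag between its software-only debut and its first ASIC:

```python
# Hypothetical, illustrative years only -- not data from the article's chart.
sw_debut = {"FP math": 1978, "3D graphics": 1987, "MPEG decode": 1991}
asic_debut = {"FP math": 1980, "3D graphics": 1996, "MPEG decode": 1995}

# Wilson Gap (years) = first-ASIC year minus software-only debut year.
wilson_gap = {tech: asic_debut[tech] - sw_debut[tech] for tech in sw_debut}
print(wilson_gap)
```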
How the SW Industry Fights the Wilson Gap
While the flexibility of the general-purpose CPU offers imaginative engineers the ultimate design surface, it likewise has the inherent limitation that code must be reduced to a lowest common denominator: the CPU instruction set. Time and again, this limitation has created a Wilson Gap between what consumers want and what the PC platform can inherently deliver.
For Many of Today’s Needs Moore’s Law is too Slow
As the previous graph illustrates, the Wilson Gap was a limiting factor in the potential market for specific technologies when the CPU was not fast enough for consumer demand for floating-point operations. Likewise, at various times throughout PC history, the CPU has not kept up with demand for:
Digital Signal Processing (DSP)
SSL Processing (encompassing 3DES, RSA, AES)
Windows Media Encoding/Decoding
XML Parsing and Canonicalization
ASICs help reduce the Wilson Gap
When Moore’s Law is too slow, we traditionally rely on ASICs to fill the Wilson Gap. In all of the examples above (math coprocessor, DSP, 3D, 3DES, RSA, MPEG, etc.), we now have fairly low-cost ASICs that solve the performance issue, although total time to solution and time to money remain far too long for current industry economic conditions. These ASIC processors will typically accelerate a task, offload a task, or perform some combination of the two. For the remainder of this paper we’ll use the term “accelerate” to include acceleration that encompasses CPU offloading.
The Downside to ASIC Solutions
Unfortunately, ASICs are inherently slow to market and are a very risky business proposition. For example, a typical ASIC takes 8 to 12 months to design, engineer, and manufacture. Thus, their target technologies must be under extremely high market demand before companies will make the bet and begin the technology development and manufacturing process. As a result, ASICs will always be well behind the curve of information consumer requirements served by cutting-edge software.
Another difficulty faced in this market is that ASIC (silicon gate) development is very complex, requiring knowledge of VHDL or Verilog. The efficient engineering of silicon-gate-oriented solutions requires precision in defining the problem space and architecting the hardware solution. Both of these precise processes take a long time.
FPGAs further reduce the Wilson Gap
A newer approach to reducing the Wilson Gap that is gaining popularity is the use of Field-Programmable Gate Arrays (FPGAs). FPGAs provide an interim solution between ASICs and software running on a general-purpose CPU. They allow developers to realign the silicon gates on a chip and achieve performance on par with ASICs, while at the same time allowing the chip to be reconfigured with updated code or a completely different algorithm. Modern development tools are also coming online that reduce the complexity of programming these chips by adding parallel extensions to the C language and then compiling C code directly to gate patterns. One of the most popular examples is Handel-C (out of Cambridge).
The Downside to FPGA Solutions
Typically, FPGAs run at 50% to 70% of the speed of an identical ASIC solution. However, FPGAs are more readily geared to parallelizing algorithms, are configurable so they can receive updates, and leverage a shorter development cycle (http://www.xilinx.com/products/virtex/asic/methodology.htm). These factors combine to extend the lifespan of a given FPGA-based solution beyond that of an ASIC solution.
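The trade-off can be framed as a back-of-envelope calculation. All numbers below are hypothetical (the product window, the throughput units, and the FPGA's 4-month cycle are assumptions; the 12-month ASIC cycle and ~60% relative speed echo the figures in the text): even a slower FPGA can deliver more total work over a product window if it ships enough months earlier.

```python
def total_work(throughput, months_to_market, window_months=18):
    """Work delivered over a fixed product window, given time-to-market.
    throughput is in arbitrary work-units per month."""
    return throughput * max(0, window_months - months_to_market)

# Hypothetical scenario: ASIC at full speed but a 12-month design cycle,
# FPGA at ~60% of that speed but shipping in 4 months.
asic = total_work(throughput=10, months_to_market=12)
fpga = total_work(throughput=6, months_to_market=4)
print(asic, fpga, fpga > asic)
```

With a longer product window the ASIC eventually wins on raw throughput, which is consistent with the article's repeating pattern: FPGAs and software bridge the gap early, ASICs take over once the market is proven.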
A Repeating Pattern
Looking at the market for hardware accelerators over the past 20 years we see a repeating pattern of:
First implemented on the general purpose CPU
Migrated to ASIC/DSP once the market is proven
Next the technology typically takes one of two paths:
The ASIC takes on a life of its own and continues to flourish outside of the CPU (such as 3D graphics), or is embedded back down onto the standard motherboard
The ASIC becomes obsolete as Moore’s Law brings the general-purpose CPU up to par with the accelerator by including the newly required instructions.
Now let’s examine two well-known examples in the Windows space where the Wilson Gap has been clearly identified and hardware vendors are in the development cycle of building ASIC solutions to accelerate our bottlenecks.
Current Wilson Gaps
Our first example is Windows Media 9 decoding; ASIC hardware is on its way thanks to companies such as ATI, NVIDIA, and others. This will allow playback of HD-resolution content, such as the new Terminator 2 WM9 DVD, on slower systems. Another example is TCP Offload Engines (TOEs), which have recently arrived on the scene. Due to the extensibility of both the Windows media and networking stacks, both of these technologies are fairly straightforward to implement.
Upcoming Wilson Gaps – Our Challenge
However, moving forward, the industry faces other technologies that don’t have extensibility points for offloading or acceleration. This lack of extensibility has led to duplication of effort across various product teams: not duplication in a competitive sense (which is usually good), but rather a symbiotic duplication of effort that increases the cost of maintenance and security.