Tag Archives: Technology Trends

Even After 14 Generations, Intel’s 2023 “Meteor Lake” CPU will surprise you

Look out, GPU makers!

Intel is bringing “ray tracing” (a rendering technique that lights up a scene by simulating how rays of light travel, bounce, and reach the viewer’s eye) to its tiled GPU circuits. That’s not easy, but it allows the Si structures to “view” an artificial scene and render it so it looks natural.

That processing demands enormously complex physics calculations about how light actually behaves. It is an amazingly ambitious Si engineering feat, from both the computational-workload and FAB wafer-processing perspectives.
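
To make the scale of that math concrete, here is a toy sketch (Python, purely illustrative and not Intel’s implementation) of the single most common operation in a ray tracer: testing whether a ray hits a sphere and shading the hit point with a simple cosine (Lambert) term. Real-time ray tracing repeats calculations like this millions of times per frame, which is why dedicated silicon matters.

    import math

    def ray_sphere_hit(origin, direction, center, radius):
        """Return the nearest positive hit distance along the ray, or None.
        'direction' is assumed to be a normalized 3-vector."""
        oc = [o - c for o, c in zip(origin, center)]
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c              # quadratic discriminant (a == 1)
        if disc < 0:
            return None                     # the ray misses the sphere
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 0 else None

    def lambert(normal, light_dir):
        """Diffuse brightness: cosine of the angle between surface and light."""
        return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

    # One ray, one sphere, one light; a real frame needs millions of these.
    hit = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
    print("hit distance:", hit, "brightness:", lambert((0, 0, -1), (0, 0, -1)))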

The impact on consumers should be felt in Gaming, Media Content Production, Streaming Server Processing, and other GPU-intensive applications.

Finally, with the wave of 5G infrastructure being deployed around the globe, Cloud, Mobile and IoT implementations will combine with this to deliver an exciting “Game Changer” in how we enjoy technology.


US Gov Investment in Semiconductor Industry will drive Jobs, Construction and R&D.

Look for new CPU/GPU, Networking, Storage Silicon and Foundry Services Innovation!

With the passage of the US’ new CHIPS Act of 2022, the semiconductor industry will have access to huge sums funded by the Federal Government. These funds will be used to move manufacturing back onshore and create new jobs in FAB construction and operations, along with incentivizing R&D and technology innovation.

The CHIPS Act’s investment tax credit and subsidies will be crucial steps to “bolster the semiconductor supply chain based in the United States and keep pace with industry incentives offered by other regions,” said Ajit Manocha, chief executive of trade group Semi, in a statement last week.

Biden Set to Sign Law to Pump $53 Billion Into US Chip Manufacturing

The semiconductor industry has been great for established companies like Intel, Samsung, AMD, Qualcomm and Nvidia, but the changing needs of applications, especially those in the data center, have created an opening for a new wave of startups that are creating new kinds of silicon solutions for compute, storage and networking.

New chip startups run the full gamut from general-purpose processors that can outperform Intel’s and AMD’s CPUs to accelerators that can speed up AI and deep learning workloads. With IDC projecting global semiconductor revenue to continue growing in 2023 after years of pandemic-induced challenges, these semiconductor startups could benefit from what I once co-authored a “Think Week” paper about, titled “When Moore’s Law Is Not Enough.”

Companies to watch in 2022-23:

Obviously, Intel, AMD, Nvidia and Qualcomm will benefit immensely from the CHIPS Act. However, these startups are taking on semiconductor heavyweights like Intel and Nvidia with new kinds of silicon solutions for compute, storage, and networking, many of which are headed for the data center. These are a few Si enterprises to keep your eye on.

  • Ampere Computing targets Intel and AMD in the data center with Arm-based CPUs it says can outperform the competition. The Santa Clara, Calif.-based startup counts Tencent, ByteDance, Equinix, Cloudflare and UCloud as customers, in addition to engagements with Microsoft and Oracle. The chipmaker has also expanded its OEM coverage beyond Gigabyte and Wiwynn to Foxconn and Inspur Group. The company designs its own custom cores for processors, going beyond its original strategy of using core designs from Arm for its 80-core Altra and 128-core Altra Max CPUs. The company’s CPUs are now available in public instances on Oracle Cloud Infrastructure.
  • Cerebras Systems is targeting the AI compute market with its large Wafer Scale Engine 2 chip, which it calls the “largest AI processor ever made.” The Los Altos, Calif.-based startup announced the WSE-2 chip — which comes with 2.6 trillion transistors, 850,000 cores and 40 GB of on-chip memory — saying it is “orders of magnitude larger and more performant than any competing GPU on the market.” The WSE-2 powers the startup’s purpose-built CS-2 AI system, which Cerebras says can deliver “more compute performance at less space and less power than any other system.” The startup’s systems have been adopted by the University of Edinburgh’s EPCC supercomputing center, GlaxoSmithKline, Tokyo Electron Devices as well as the U.S. Department of Energy’s Argonne National Laboratory and Lawrence Livermore National Laboratory.
  • EdgeQ targets Intel and other players in the 5G infrastructure space with a new, AI-infused modem that can replace multiple components in a base station at a fraction of the cost. The Santa Clara, Calif.-based startup showed its eponymous “base-station-on-a-chip” in 2021, promising a 50 percent reduction in total cost of ownership for 5G base stations over competing solutions. The startup showed off the RISC-V-based chip after emerging from stealth mode in November 2020 with $51 million in funding, and it has since added former Qualcomm CEO and Executive Chairman Paul Jacobs and former Qualcomm CTO Matt Grob as advisors. The EdgeQ chip can perform AI functions for edge computing applications while also using AI to improve network capabilities.
  • Fungible aims to make hyperscale-style data centers available to any organization with its turnkey Fungible Data Center solution, powered by its namesake data processing unit (DPU). With the already launched data center solution, the Santa Clara, Calif.-based startup claims it can easily slice and dice compute, storage, network, and GPU resources on demand while providing “performance, scale and cost efficiencies not even achievable by hyperscalers.” This is all made possible by the Fungible DPU, which can offload various functions from the CPU, including bare-metal virtualization, software-defined networking, and local storage. The company has raised more than $300 million from investors, including a $200 million Series C round led by the SoftBank Vision Fund.
  • Mythic is targeting Nvidia and other AI chipmakers with its M1076 Analog Matrix Processor, which it says can deliver up to 25 tera-operations per second of AI compute while requiring 10 times less power than a typical GPU or system-on-chip solution. The Redwood City, Calif.-based startup introduced the M1076 AMP in a variety of form factors for servers and edge devices, saying it can tackle use cases ranging from video analytics to augmented and virtual reality. The launch of the new chip comes after the startup raised a $70 million funding round from Hewlett Packard Enterprise and other investors, bringing its total funding to $165.2 million.
  • Pliops targets data center storage economics with a hardware accelerator it says can “exponentially” improve cost, performance, and endurance for SSD storage. The San Jose, Calif.-based startup earlier this year announced it has raised a $65 million funding round led by Koch Disruptive Technologies, with participation from Intel Capital, Nvidia, Xilinx and Western Digital. The startup says its Pliops Storage Platform has been tested by more than 20 tier-one and enterprise companies, including database software vendor Percona, which said that the “Pliops storage processor is unique in that it is able to increase performance, improve compression and reduce write amplification.”
  • Xsight Labs targets the data center switch market with a super-fast, programmable switch it says can meet the power and performance demands of cloud, high-performance computing and AI applications while also providing a flexible and scalable architecture. The Tel Aviv, Israel-based startup announced in March 2021 that it had raised a Series D funding round backed by several investors, including Intel Capital, Xilinx, and Microsoft’s venture fund, M12. The startup launched out of stealth mode in December 2020 with the announcement that it was sampling X1, which it says is the industry’s first switch to offer up to 25.6 terabits per second. The startup says its switch silicon delivers these speeds at exceptionally low power, with less than 300 watts required at the high end.
  • SambaNova Systems is targeting AI computing with an integrated approach spanning hardware, software and services that takes advantage of the startup’s Reconfigurable Dataflow Unit chip. The Palo Alto, Calif.-based startup announced it had raised a $676 million funding round led by SoftBank Group that also included the venture arms of Google and Intel. The startup is using the funding to grow market share against Nvidia and other competitors with its subscription-based Dataflow-as-a-Service AI platform, which relies on SambaNova’s RDU-based DataScale system to deliver what it says are “unmatched capabilities and efficiency” for AI.
  • SiFive is using an open-source alternative to target Arm’s CPU design business with core designs and custom silicon solutions for AI, high-performance computing and other growing markets based on the open and free RISC-V instruction set architecture. The San Mateo, Calif.-based startup has recently received takeover interest from multiple parties, including Intel, which has offered $2 billion to acquire the startup. Before the reported takeover interest, SiFive announced that Intel’s new foundry business, Intel Foundry Services, will manufacture processors using SiFive’s processor designs. Last August, the startup raised a $61 million Series E funding round led by SK Hynix, with participation from several other investors, including Western Digital Capital, Qualcomm Ventures, and Intel Capital. A month later, the company appointed former Qualcomm executive Patrick Little as its new CEO.
  • Tachyum is targeting Intel, AMD, Nvidia and other silicon compute vendors with what it calls the “world’s first universal processor,” which it says can replace the functions of CPUs, GPUs and other kinds of compute processors while providing higher performance and power efficiency. The Las Vegas-based startup announced that customers can now test its Prodigy processor with the company’s FPGA-based emulation system. A four-socket reference design motherboard is expected to be available this year. Tachyum claims that its Prodigy processor can run legacy x86, Arm and RISC-V binaries and outperform Intel’s fastest Xeon processors across data center, AI and high-performance computing workloads while consuming 10 times less power. The startup says Prodigy can also outperform Nvidia’s fastest GPU in AI and HPC workloads.
Top 10 Si Enterprises as of August 2022

Working to deliver Wireless Innovation in the PRC

My firm has been delivering technology innovation to our PRC-based clients for 16 years now. One key thing we do is tie all of our initiatives to the PRC’s 13th Five-Year Plan (click to see a good slide deck). The technology section is key to understanding the PRC government’s motivations and incentives for migrating its workforce from a manufacturing culture to an advanced-technology juggernaut.

Some investor strategists believe that one way to “Trump-proof” a technology enterprise’s investments in China is to focus on the current eCommerce explosion.

Michael Robinson just wrote, “E-commerce growth pivots off of China’s fast-growing middle class, which now accounts for 19% of the population, according to Goldman Sachs. That figure has been rising by more than a percentage point each year – and should keep growing at that rate for years to come.
The rapid growth in smartphone use is another key factor. There are now nearly 400 million of these e-commerce-enabling devices in China, a figure that’s rising at a double-digit pace. And mobile-based spending – that is, purchases made via a smartphone or tablet instead of a PC or laptop – now accounts for 50% of all Chinese e-commerce, says eMarketer.

eCommerce from any smart device

That compares to just 22% here in the United States.
This growth in e-commerce is coming as China’s middle class continues to rise and the nation shifts away from its focus on exports and places a greater emphasis on domestic spending.”

My strategy is to provide engineering tools that facilitate innovative wireless technologies, e.g., active-steering antennas, for the large number of EE and SW designers creating mobile, eCommerce-enabled devices in the PRC.

save energy, better connections


Microsoft pays $26.2B to buy LinkedIn… Too much?

Is $26.2 Billion too much?

Microsoft seems to be paying a lot to buy LinkedIn if you just look at the “P&L” side of the deal.

But if you examine the long-term MSFT strategy of making Office 365 the pervasive business and social platform for documents, presentations and spreadsheets, then it makes the deal worth the investment cost.


Windows 10 Review from BI Insiders

Great article on Windows 10 here.  It approaches Windows 10 from an in-depth analysis perspective and is quite complete.


 

 

Is it Possible? – A True Ubiquitous Platform


Polyglot VS RDBMS VS NoSQL… Which is RIGHT?

In doing work for a local healthcare product venture, I was asked to look at the network and database requirements to support mixed-content transactions and video streaming, all while conforming to HIPAA compliance standards.  As part of this work, I developed a Web Services, cloud-based architecture that took into account EHR, HL7, document management and provider notation.

Polyglot DB used for the Web

 

This tasking led me to a deep dive into the data architecture and the DB requirements analysis needed to develop that architecture.

The question of utilizing a standard RDBMS (SQL) vs. NoSQL was an immediate consideration.  My conclusion: it depends on a large number of technical, business and regulatory factors to derive the appropriate architectural answers. For example, what other external systems are interfaced to the applications, and how do they require interaction? In general, with the prolific growth in web services, mobile and cloud computing, today’s enterprise will require a polyglot data architecture to satisfy all stakeholders.
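
As a sketch of what “polyglot” means in practice, the snippet below (Python; the store names and interfaces are hypothetical, not from the actual project) routes each class of healthcare data to the engine best suited to it, behind a single facade that the web-service layer would call.

    class PolyglotStore:
        """Route each record type to the data store best suited to it."""

        def __init__(self, rdbms, document_db, blob_store):
            self.rdbms = rdbms              # SQL engine: EHR and HL7 transactional records
            self.document_db = document_db  # NoSQL store: provider notes, unstructured documents
            self.blob_store = blob_store    # object storage: video and large media

        def save(self, record_type, payload):
            if record_type in ("ehr", "hl7_message"):
                return self.rdbms.insert(record_type, payload)      # strong consistency, audit trail
            if record_type in ("provider_note", "document"):
                return self.document_db.put(record_type, payload)   # flexible schema
            if record_type == "video":
                return self.blob_store.upload(payload)              # streamed, encrypted at rest
            raise ValueError(f"unknown record type: {record_type}")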

Trusted health clouds
See www.2gls.com/ceo.php posted by Anette Asher

 

A look at healthcare informatics provides operational insight into some of the complexities.

“Healthonomics” can be the key driving factor behind enterprise decisions to support multiple types of DB solutions, woven together in a heterogeneous way, delivering a network of web services that affect healthcare outcomes.

Ops considerations for healthcare informatics


4K Video’s significant impact on info consumers, platform vendors and content providers

As we start to see the uptake in 4K video content, suppliers of CPUs, NICs (network interface cards), networks (LAN, WLAN, Wi-Fi) and storage technologies will all be struggling to “step up to the plate” in meeting the challenges of this disruptive video format.  IaaS platform providers will also face huge challenges in configuring cloud components that can be rapidly provisioned for 4K content or video streaming.  Even the security industry will be affected with respect to video surveillance infrastructure (see this Video Security Magazine article).

SD vs. HD vs. 4K

This is a Technologies Strategic Directions “Sleeping Inflection Point” for multiple industries, manufacturers, e-workers and information consumers.

Ultra-high-definition (UHD) resolution is 3840×2160 pixels and is now used in displays and broadcast. This is not the same as 4K (4096×2160 pixels), which is used in digital cinema. People tend to use the terms interchangeably, but there is a significant difference in the networking bandwidth required to serve 4K consumption.
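
For a rough sense of that bandwidth impact, the back-of-the-envelope calculation below compares the uncompressed bit rates of HD, UHD and DCI 4K. The assumptions (10-bit color, 4:2:0 chroma subsampling, 60 frames per second) are mine for illustration; real delivery uses heavy compression, but the ratios between formats hold.

    def raw_gbps(width, height, bits_per_pixel=15, fps=60):
        """Uncompressed bit rate in gigabits per second.
        15 bits/pixel ~= 10-bit color with 4:2:0 chroma subsampling."""
        return width * height * bits_per_pixel * fps / 1e9

    for name, w, h in [("HD 1080p", 1920, 1080),
                       ("UHD     ", 3840, 2160),
                       ("DCI 4K  ", 4096, 2160)]:
        print(f"{name}: {raw_gbps(w, h):5.2f} Gbps uncompressed")

    # Prints roughly 1.87, 7.46 and 7.96 Gbps: about 4x HD for UHD,
    # and another ~7% on top of that for true DCI 4K.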

From a display-technology perspective, we are all aware that TVs now offer this content.  But what about the other network and computer infrastructure components?  When will they be able to handle the disruptive impact of 4K?


International Consortium Announces New Chip Technology

A blessing or curse to the industry?

This week IBM, Samsung, New York State, and GlobalFoundries announced a new high-capacity chip made with a combination of silicon and germanium.

Are IBM et al. leading us in the right direction? As the width of connections on chips approaches the diameter of the individual silicon atoms in those connectors, EUV etch stations and changes in deposition technology are just the tip of the CAPEX impact required to transition and follow the consortium’s lead.  At approximately $2B per FAB in the near future, who can afford to follow? What ripples will this cause in the ecosystem of silicon equipment manufacturing, and at today’s commodity pricing, can ASPs tolerate this new move? Even though Intel mentions 7-nano occasionally, there seems to be no defined roadmap to get there. Consortiums and research are good things. However, we now have to figure out practical steps to get to the future the consortium has described.


Debate over Public Cloud TCO is mostly Linux noise

A recent interview with the Red Hat CEO touts the benefits of private cloud implementation. See it HERE.

Maybe your next Server won’t be collocated with you!

 

Public vs. Private TCO?

 

This debate is usually short-sighted and doesn’t include all the CAPEX and OPEX costs associated with the “free OS” type of cloud operations.  Also, the reusable components from more sophisticated partner communities afford both AWS and Azure much greater long-term valuations when responsible enterprise accounting methods are used to drive the cost-benefit analyses.  The proper engineering of a cloud infrastructure, which includes smart VMs well orchestrated by business-demand-driven auto-scaling, will always push the TCO/ROI argument toward a public solution for large-scale systems.

Microsoft actually has a TCO tool that can be used to estimate the TCO of on-premises vs. Azure. There are many considerations when comparing the costs of running an on-premises datacenter with full infrastructure (servers, cooling, power, etc.) to a cloud-based service like Azure, where you pay based on the services consumed, such as storage, compute and network egress. It can be difficult to know exactly what typical costs are for your datacenter and what the costs would be for services running in Azure. Microsoft has a pricing calculator available at http://azure.microsoft.com/en-us/pricing/calculator/ which can help assess costs for Azure services, and a VM-specific calculator at http://azure.microsoft.com/en-us/pricing/calculator/virtual-machines/.
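
As a back-of-the-envelope illustration of that comparison (the numbers below are placeholders of my own, not Microsoft’s; you would substitute figures from the calculators above), a simple monthly-cost model might look like this:

    def on_prem_monthly(server_capex, life_years, power_cooling, admin):
        """Amortize hardware over its useful life, add fixed running costs."""
        return server_capex / (life_years * 12) + power_cooling + admin

    def cloud_monthly(vm_rate_per_hour, hours_run, storage, egress):
        """Pay only for hours the VMs actually run, plus storage and egress."""
        return vm_rate_per_hour * hours_run + storage + egress

    # Example: a workload only needed during business hours (~220 h/month).
    print("on-prem :", on_prem_monthly(20_000, 4, 250, 600))   # always-on
    print("cloud   :", cloud_monthly(0.50, 220, 80, 40))       # pay-per-use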

When running on-premises, you own the servers. They are available all the time, which means you typically leave workloads running constantly even though they may only be needed during the work week. There is really no additional cost to leave them running (apart from power, cooling, etc.). In the cloud you pay based on consumption, which means organizations go through a paradigm shift: rather than leaving VMs and services running all the time, companies focus on running services only when needed to optimize their public cloud spend. Some ways to optimize which services are running include:

  • Auto-scale – The ability to group multiple instances of a VM/service so that instances are started and stopped based on usage metrics such as CPU and queue depth. With PaaS, instances can even be created and destroyed as required
  • Azure Automation – The ability to run PowerShell workflows in Azure. Templates are provided to start and stop services at certain times of day, making it easy to stop services at the end of the day and start them again the next morning
  • Local Automation – Use an on-premises solution such as PowerShell or System Center Orchestrator to connect to Azure via REST to stop and start services (a minimal scheduling sketch follows this list)
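
Here is a minimal sketch of the scheduling idea behind the last two bullets (Python; the VM names, business-hours window, and start/stop hooks are placeholder assumptions, and in practice the hooks would call an Azure Automation runbook or the Azure REST API):

    from datetime import datetime

    BUSINESS_HOURS = range(8, 18)                      # 08:00-17:59, Mon-Fri
    DEV_TEST_VMS = ["build-agent-01", "demo-web-01"]   # hypothetical VM names

    def should_run(now: datetime) -> bool:
        """True only during the business-hours window on weekdays."""
        return now.weekday() < 5 and now.hour in BUSINESS_HOURS

    def reconcile(start_vm, stop_vm, now=None):
        """Start or stop each VM so its state matches the schedule."""
        now = now or datetime.now()
        action = start_vm if should_run(now) else stop_vm
        for vm in DEV_TEST_VMS:
            action(vm)   # hook: REST call, runbook trigger, etc.

    # Run from a scheduler (cron, Azure Automation); print() stands in
    # for the real start/stop hooks in this sketch.
    reconcile(start_vm=lambda v: print("start", v),
              stop_vm=lambda v: print("deallocate", v))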

Mobile World Congress is meeting in Barcelona in March

Over the course of four days, 2-5 March 2015, Mobile World Capital Barcelona will host the world’s greatest mobile event: Mobile World Congress.   See this website for more info: http://www.mobileworldcongress.com/

The mobile communications revolution is driving the world’s major technology breakthroughs. From wearable devices to connected cars and homes, mobile technology is at the heart of worldwide innovation. As an industry, we are connecting billions of people to the transformative power of the Internet and mobilising every device we use in our daily lives.

In short, the whole world is on The Edge of Innovation, and the possibilities are endless. The 2015 GSMA Mobile World Congress will convene industry leaders, visionaries and innovators to explore the trends that will shape mobile in the years ahead.

About the Event

Here are the components that make up this industry-leading event:

  • A world-class thought-leadership Conference featuring visionary keynotes and panel discussions
  • A cutting-edge product and technology Exhibition featuring more than 1,900 exhibitors
  • The world’s best venue for seeking industry opportunities, making deals, and networking
  • App Planet, the Centre of the Mobile Apps Universe, where the mobile app community gathers to learn, network and engage with innovators
  • Global Mobile Awards programme, where we recognise industry innovation and achievements
  • And, all MWC pass holders can attend 4YFN, an event focused on startups, corporations, and investors

 
