While consulting for a local healthcare product venture, I was asked to assess the network and database requirements to support mixed content transactions and video streaming, all while conforming to HIPAA compliance standards. As part of this work, I developed a Web Services cloud-based architecture that took into account EHR, HL7, document management, and provider notation.
This tasking led me to a deep dive into the data architecture and database requirements analysis needed to develop that architecture.
The question of utilizing a standard RDBMS (SQL) versus NoSQL was an immediate consideration. My conclusion: it depends on a large number of technical, business, and regulatory factors. For example, what other external systems interface with the applications, and how do they need to interact? In general, with the prolific growth of web services, mobile, and cloud computing, today's enterprise will require a polyglot data architecture to satisfy all stakeholders.
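To make the polyglot idea concrete, here is a minimal sketch of how a healthcare application might route structured, transactional records to a relational store while keeping free-form provider notes in a document store. All names, schemas, and the in-memory stand-ins are illustrative assumptions, not any specific product's API.

```python
# Hypothetical polyglot-persistence sketch: transactional billing rows go
# to an RDBMS (sqlite3 here), while variable-shaped provider notes go to
# a document store (a dict stands in for a real document database).
import sqlite3
import json

class PolyglotStore:
    def __init__(self):
        # Relational side: fixed schema, ACID transactions.
        self.sql = sqlite3.connect(":memory:")
        self.sql.execute(
            "CREATE TABLE encounters (id TEXT PRIMARY KEY, patient TEXT, amount REAL)")
        # Document side: schema-less JSON blobs keyed by encounter id.
        self.docs = {}

    def save_encounter(self, enc_id, patient, amount):
        with self.sql:  # transactional insert
            self.sql.execute(
                "INSERT INTO encounters VALUES (?, ?, ?)", (enc_id, patient, amount))

    def save_note(self, enc_id, note):
        # Free-form provider notation: no schema to migrate when it changes.
        self.docs[enc_id] = json.dumps(note)

    def billing_total(self, patient):
        row = self.sql.execute(
            "SELECT SUM(amount) FROM encounters WHERE patient = ?", (patient,)).fetchone()
        return row[0] or 0.0

store = PolyglotStore()
store.save_encounter("e1", "pat-42", 120.0)
store.save_encounter("e2", "pat-42", 80.0)
store.save_note("e1", {"provider": "Dr. A", "text": "follow-up in 2 weeks"})
print(store.billing_total("pat-42"))  # 200.0
```

The point of the split is that each workload gets the engine suited to it: aggregation and integrity constraints on the SQL side, schema flexibility on the document side.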
A look at Healthcare Informatics provides operational insight into some of the complexities.
“Healthonomics” can be the key driving factor that triggers enterprise decisions to support multiple types of database solutions, woven together heterogeneously to deliver a network of web services that affect healthcare outcomes.
As we start to see the uptake of 4K video content, suppliers of CPUs, NICs (network interface cards), networks (LAN, WLAN, Wi-Fi), and storage technologies will all be struggling to step up to the plate in meeting the challenges of this disruptive video format. IaaS platform providers will also face huge challenges configuring cloud components that can be rapidly provisioned for 4K content or video streaming. Even the security industry will be affected with respect to video surveillance infrastructure (see this Video Security Magazine article).
This is a Technologies Strategic Directions “sleeping inflection point” for multiple industries, manufacturers, e-workers, and information consumers.
Ultra-high definition (UHD) resolution, 3840×2160 pixels, is now used in displays and broadcast. This is not the same as 4K (4096×2160 pixels), which is used in digital cinema. People tend to use the terms interchangeably, but there is a significant difference in the networking bandwidth required to serve 4K consumption.
From a display-technology perspective, we are all aware that TVs now offer this content. But what about the other network and compute infrastructure components? When will they be able to handle the disruptive impact of 4K?
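A quick back-of-the-envelope calculation shows why the format is disruptive. The frame rate and bit depth below are illustrative assumptions for uncompressed video; real delivery uses heavy compression, but the raw ratios carry over.

```python
# Raw (uncompressed) bit-rate comparison: 1080p HD vs UHD vs DCI 4K.
# 24 fps and 36 bits/pixel (12 bits x 3 channels) are assumed figures.
def raw_gbps(width, height, fps=24, bits_per_pixel=36):
    """Uncompressed bit rate in gigabits per second."""
    return width * height * fps * bits_per_pixel / 1e9

hd  = raw_gbps(1920, 1080)   # broadcast HD
uhd = raw_gbps(3840, 2160)   # UHD: exactly 4x the pixels of 1080p
dci = raw_gbps(4096, 2160)   # digital-cinema 4K: slightly wider still

print(f"HD: {hd:.2f} Gbps, UHD: {uhd:.2f} Gbps, DCI 4K: {dci:.2f} Gbps")
```

UHD quadruples the pixel payload of 1080p, and DCI 4K adds roughly another 7% on top of UHD, which is why NICs, Wi-Fi, and storage paths sized for HD struggle.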
A recent interview with the Red Hat CEO touts the benefits of private cloud implementation; see it here.
This debate is usually short-sighted and doesn't include all of the CAPEX and OPEX costs associated with the “free OS” style of cloud operations. The reusable components from more sophisticated partner communities also afford both AWS and Azure much greater long-term value when responsible enterprise accounting methods drive the cost-benefit analyses. Properly engineering a cloud infrastructure, with smart VMs orchestrated by business-demand-driven auto-scaling, will always push the TCO/ROI argument toward a public solution for large-scale systems.
Microsoft actually has a TCO tool for estimating the cost of on-premises versus Azure. There are many considerations when comparing the costs of running an on-premises datacenter, with its full infrastructure of servers, cooling, power, and so on, against a cloud-based service like Azure, where you pay based on the services consumed, such as storage, compute, and network egress. It can be difficult to know exactly what typical costs are for your datacenter and what the costs would be for services running in Azure. Microsoft has a pricing calculator at http://azure.microsoft.com/en-us/pricing/calculator/ that can help assess costs for Azure services, and a VM-specific calculator at http://azure.microsoft.com/en-us/pricing/calculator/virtual-machines/.
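The core of any such comparison is simple arithmetic over utilization hours. The hourly rate below is a hypothetical placeholder, not an actual Azure price; use Microsoft's calculators for real figures.

```python
# Illustrative comparison of always-on compute vs pay-per-hour consumption.
# $0.20/hour is an assumed rate, not a real Azure price.
HOURS_PER_MONTH = 730  # average hours in a month

def monthly_cost(rate_per_hour, hours_used):
    return rate_per_hour * hours_used

always_on      = monthly_cost(0.20, HOURS_PER_MONTH)  # VM left running 24x7
work_week_only = monthly_cost(0.20, 12 * 22)          # 12 h/day, 22 workdays

print(f"always-on: ${always_on:.2f}/mo, work-week only: ${work_week_only:.2f}/mo")
```

Even at the same hourly rate, shutting a workload down outside business hours cuts the bill by roughly two thirds, which is exactly the behavior consumption pricing rewards.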
When running on-premises, you own the servers. They are available all the time, which means you typically leave workloads running constantly even though they may only be needed during the work week. There is really no additional cost to leave them running (apart from power, cooling, etc.). In the cloud you pay based on consumption, which forces a paradigm shift: rather than leaving VMs and services running all the time, companies focus on running services only when needed to optimize their public cloud spend. Some approaches that help optimize running services are:
Auto-scale – Group multiple instances of a VM/service and start or stop instances based on usage metrics such as CPU and queue depth. With PaaS, instances can even be created and destroyed as required.
Azure Automation – Run PowerShell Workflows in Azure; templates are provided to start and stop services at certain times of day, making it easy to stop services at the end of the day and start them again at the start of the next.
Local Automation – Use an on-premises solution such as PowerShell or System Center Orchestrator to connect to Azure via REST and stop/start services.
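The auto-scale rule in the first item above can be sketched as a pure decision function. The thresholds and instance bounds are illustrative assumptions, not values from any Azure scale set.

```python
# Minimal auto-scale decision: adjust instance count from CPU utilization
# and queue depth, clamped to a [min, max] range. Thresholds are assumed.
def desired_instances(current, avg_cpu_pct, queue_depth,
                      min_instances=1, max_instances=10):
    if avg_cpu_pct > 75 or queue_depth > 100:
        current += 1      # scale out under load
    elif avg_cpu_pct < 25 and queue_depth == 0:
        current -= 1      # scale in when idle
    return max(min_instances, min(max_instances, current))

print(desired_instances(2, avg_cpu_pct=90, queue_depth=10))  # 3
print(desired_instances(2, avg_cpu_pct=10, queue_depth=0))   # 1
```

A real implementation would smooth the metrics over a window before deciding, but the shape of the logic is the same: metrics in, target instance count out.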
I’ve been digesting and expanding on an interesting white paper authored by the Microsoft Azure Incubation team titled Building the Internet of Things – Early learnings from architecting solutions focused on predictive maintenance. I agree with the premise that ubiquity in connection technology will be the key enabler and that predictive maintenance will probably be required to sustain a truly global, ubiquitous connection state. Microsoft has recently been shifting its terminology from Internet of Things (IoT) to Internet of Everything (IoE); here I use the terms interchangeably. A key technical enabler of the IoT is ubiquitous connectivity. A week or so ago I blogged about a new technology called Active Steering™, which should be the winner in patented connectivity hardware/software/firmware for antenna products.
Just imagine that the antenna on your device was constantly sampling the wireless signals around your location, finding the strongest source, and directing the antenna's focus toward it. That is what an Active Steering antenna does on your phone, tablet, or PC. By using this technique, the system is also performing predictive maintenance on the connectivity configuration for your specific device and location. Let’s first look at the Open Systems Interconnection (OSI) model. Even though the Internet model uses a simplified abstraction, the models in the previous figure and the associated well-known logical protocols are comparable. Application-layer protocols are not concerned with the lower layers in the stack other than being aware of their key attributes, such as IP addresses and ports. The right side of the figure shows the logical protocol breakdown transposed over the OSI model and the TCP/IP model.
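The steering behavior described above reduces to a selection problem: sample signal strength per beam direction, then point the beam at the best one. The real product does this in hardware and firmware; this toy model only illustrates the selection logic, and the readings are invented.

```python
# Toy model of "strongest source" antenna steering: given RSSI samples
# (in dBm, so less negative = stronger) per beam direction, pick the
# direction with the strongest signal.
def steer(samples):
    """samples: dict mapping beam direction (degrees) -> RSSI in dBm."""
    return max(samples, key=samples.get)

readings = {0: -80, 90: -62, 180: -71, 270: -90}
print(steer(readings))  # 90
```

Run continuously, this loop also doubles as the "predictive maintenance" of the link: the device keeps re-validating its connectivity configuration as the RF environment changes.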
Special-purpose devices differ from information-centric devices not only in the depth of their relationship with back-end services but also in their interaction patterns with those services, because of their role as peripherals. They are not the origin of command-and-control gestures; instead, they typically contribute information to decisions and receive commands as a result of those decisions. The decision-maker does not interface with them locally, with the device acting as an immediate proxy; the decision-maker is remotely connected and might be a machine. We usually classify interaction patterns for special-purpose devices into the four categories indicated in the following figure.
All of these models need uninterrupted connectivity to enable the ultimate user experience that Windows 10 could offer with the addition of Active Steering Technologies at the Platform level.
My firm has been engaged by one of the world’s largest PRC-based small-appliance manufacturers to architect and implement a cloud/mobile/appliance IoT offering. This new small wine appliance will be launched in Q4 of 2014.
In fact this is an exciting project in which WilQuest is partnering with Microsoft, interKnowlogy, Tridea Partners, and others to create a “Cloud of Things” (CoT) infrastructure. A global software/hardware engineering team is developing products on Azure, Windows 8, Android, iPhone, iPad, Intel, and ARM platforms to create a seamless web-services orchestration of devices and applications, each performing a segment of a task that the end user requests via gesture, mouse, or keyboard action.
There’s a lot of buzz stating that Cloud Computing and Big Data are synonymous. See a Forbes article here stating:
“Big data is the new cloud computing.”
This sentiment was recently expressed in an interview with Motley Fool analyst Tim Beyers, who analyzed the zeitgeist coming out of the South by Southwest (SXSW) conference and observed that cloud computing and big data were now one and the same phenomenon, converging on enterprises of all shapes and sizes.
For those who don’t know what big data is, this Intel Video gives you a “Big Data 101” primer.
The Cloud definitely provides a cost-effective and timely way to go after big data problems: using the elasticity of the cloud’s IaaS foundation, you can dump the costly resources when you’ve finished, or let them grow only when needed. But the two are not one and the same. Big data is just the current “belle of the ball” for enterprise usage of the cloud. See below Gartner’s Hype Cycle for emerging technologies, 2012. It shows we are either in or approaching the “Trough of Disillusionment” regarding MapReduce and DBaaS offerings. I’m eager to apply some innovative ideas I have about the trip out of this trough on some upcoming projects.
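For readers who want the map/reduce pattern referenced above in miniature: a map step emits (key, value) pairs, a shuffle groups them by key, and a reduce step aggregates each group. This in-process sketch is mine, not from any particular framework; real systems distribute the same steps across many machines.

```python
# MapReduce in miniature: word count over a list of text lines.
from collections import defaultdict

def map_step(line):
    # Emit a (word, 1) pair for every word in the line.
    return [(word, 1) for word in line.lower().split()]

def reduce_step(pairs):
    # "Shuffle" (group by key) and reduce (sum) in one pass.
    totals = defaultdict(int)
    for key, count in pairs:
        totals[key] += count
    return dict(totals)

lines = ["big data big cloud", "cloud data"]
pairs = [p for line in lines for p in map_step(line)]
print(reduce_step(pairs))  # {'big': 2, 'data': 2, 'cloud': 2}
```

The cloud angle is that map and reduce are embarrassingly parallel: map tasks fan out across elastic IaaS nodes, and those nodes can be released the moment the job completes.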
The dramatic growth in smartphones, tablets, and vertical-market portable devices (e.g., medical instrumentation) is starting to drive major change at big tech companies. If you watch the product offerings and new positioning of Google, Microsoft, and Apple, you’ll see that significant investments are geared toward the mobile consumer and the mobile information worker. These products require new device technologies such as flexible silicon and thin, flexible substrates for interconnect technology.
A good example of this is the lightning-fast reorganization of Intel after Brian Krzanich’s installation as CEO. Under Paul Otellini’s tenure, Intel missed a huge opportunity to become the chip supplier for Apple’s iPhone: even though the traditional, conservative “number-crunching, data-driven” advice went against his gut, Intel passed on the opportunity. Their analysis misjudged the potential volume by a factor of 100 and overestimated the manufacturing costs. Basically, the conservative groupthink there projected the iPhone as a losing business proposition (see here). The new CEO immediately reorganized the global enterprise to make it more agile and created a New Devices Group reporting directly to him (see here).
Hopefully this will open Intel up to new markets and new types of silicon architecture, along with new manufacturing processes. Hopefully the industry will also follow Intel’s lead and innovate even more in this hot technology domain. When you look at flexible silicon and thin-film technologies, the future is clear: the new companies that grow into tech giants will be those that embrace this technology and benefit from the lessons learned by the old tech giants.
The influx of companies trying to exploit “Big Data” as a new revenue source has created a number of workforce challenges for senior managers. Do they hire very smart math folks to devise new algorithms and create a “secret sauce” for their products? Do they develop or acquire superior hardware that leverages new silicon technology to better process big data? Do they form teams with practical business experience to ferret out which real problems exist in the marketplace and which approaches to analytics will be truly appreciated by customers’ end users?
Well, the answer is… a little of each! The most important thing many companies are missing today is that their hiring decisions focus on the technology and the technologists, but not on business-logic experience. There is great value in teams of technology folks embedded with thought leadership that comes from experience. Bright, eager, smart people with minimal experience know theory and math but don’t know human behavior in business. They also don’t understand the technology-assimilation hurdles that form huge barriers to rapid adoption and market-share growth. The targeted customer base will often need help understanding:
How much data do we have?
What is actionable information contained in the big data fog?
How much information do we need to make decisions?
What changes in data are significant and require action?
What is a practical “on ramp” to use big data technology?
The bottom line: an integrated team of smart technologists, stewarded through development by experienced thought leadership, will produce the “BIG THEORY” required to make big data solutions palatable and easily digested by the human organism we call an enterprise. In reality, it is meaningful mobile visualization that transforms BIG DATA into actionable information.
As enterprises come to grips with cloud computing demands (both internal and external), IT groups will soon realize that the hybrid model is the best fit for the new enterprise IT organization. This will also force a closer alignment with the various business units and provoke a rethink of IT costing models: can IT really remain a cost center given the inevitable variable demand curve of cloud services? Enterprise IT shops will consider various vendors (e.g., Azure, HP, VMware, Amazon, and others) in light of the matrix created by matching customers’ service-type needs to the flexibility of a vendor’s cloud service offerings to suit the enterprise’s complex business needs. Ease of entrance and exit will be the driving force behind vendor selection: not just cost, but ease of achieving true operational excellence.
Finance, Corporate Strategy, business units, and IT will collaborate to determine which “flavors” of cloud services are needed; for example, the SaaS, IaaS, and PaaS models may all be required in view of the business objectives. The decision about which service offerings to implement will drive IT’s customers to do a functional decomposition of existing applications and distill which services are used today. This will lead to an “applicability analysis” of which type of cloud implementation makes good business sense. Some may choose Cloud Platform as a Service, Cloud Infrastructure as a Service, or a Cloud Software as a Service model. These may also include convenient “off-ramp and on-ramp” strategies that allow customers to switch as circumstances dictate. An example of the choices is illustrated below:
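One way to operationalize the applicability analysis is to score each decomposed application against a few simple criteria and suggest a service model. The criteria, flags, and portfolio below are illustrative assumptions, not a formal methodology.

```python
# Hedged sketch of an "applicability analysis": map each application in
# the decomposed portfolio to a suggested cloud service model.
def suggest_model(app):
    if app.get("commodity_function"):                       # e.g., email, CRM
        return "SaaS"
    if app.get("custom_code") and app.get("standard_runtime"):
        return "PaaS"                                       # custom app, standard stack
    return "IaaS"                                           # needs full OS/VM control

portfolio = [
    {"name": "corporate email", "commodity_function": True},
    {"name": "claims engine", "custom_code": True, "standard_runtime": True},
    {"name": "legacy ERP", "custom_code": True, "standard_runtime": False},
]
for app in portfolio:
    print(app["name"], "->", suggest_model(app))
```

In practice the scoring would weigh regulatory constraints, data gravity, and the off-ramp/on-ramp strategies mentioned above, but even a rough first pass like this helps frame the Finance/IT conversation.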
Social Media Marketing: Big corporations will finally get serious because they’ll find a way to monetize their involvement with social media. So look for PR/marketing plans to have integrated social media buys and plans.
$3B is spent annually on mobile ads, but that is only about 5% of total ad spend. Look for the trendsetters to be advertising on tablets.
Enterprise Gamification: across marketing platforms, enterprise employee-training platforms, and second-screen applications. In 2013 we will see enterprise gamification surpass consumer gamification.
“So-Mofying”: Lots of VC money is going into technology that makes enterprises more social and mobile. An inflection point will be created in the need to download these apps.
Mobile Retail: Technology drives in-store and out-of-store shopping experiences, in-store navigation experiences, and social and mobile sharing of products.
Set-Top Boxes in 2013: This technology goes through drastic changes. Content creators/owners and providers are trying to increase engagement across channels and provide new UI application linkages that allow switching context between VOD and programmed content.