VDC sizing and design

Intro

One of the best practical (rather than academic) definitions of an “information system” is: an automated system that produces data for further use. An algorithm (the engine of any information system) is a rule for transforming input data into output data, so fundamentally any information system transforms input data into output data. We can even say that this is the sole reason for an information system to exist; the value of an information system is therefore defined by the value of its data. So any information system design starts with the data and then adds the algorithms, hardware and everything else required to deliver data with a known structure and value.

Prerequisites

Stored data

We start the design of an information system with data. First of all, we document all planned datasets for processing and storage. Data characteristics include (a minimal inventory record is sketched after the list):

  • the amount of data
  • data lifecycle (amount of new data per period, data lifetime, rules for processing outdated (dead) data)
  • data classification by relationship to the core business from an availability / integrity / confidentiality perspective, including financial KPIs (such as the financial impact of losing the last hour of data)
  • data processing geography (the physical location of data processing hardware)
  • external requirements for each data class (personal data laws, PCI DSS, HIPAA, SOX, medical data laws, etc.).
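
To make the inventory concrete, here is a minimal sketch of such a dataset record in Python. All field names and the example values are illustrative assumptions for this post, not part of any real design or tool.

    from dataclasses import dataclass, field
    from enum import Enum


    class Criticality(Enum):
        """Availability / integrity / confidentiality impact on the core business."""
        LOW = 1
        MEDIUM = 2
        HIGH = 3


    @dataclass
    class DatasetRecord:
        """One row of the dataset inventory described above (illustrative fields)."""
        name: str
        volume_tb: float                    # current amount of data
        growth_tb_per_month: float          # new data per period
        retention_months: int               # data lifetime before it is considered outdated
        expiry_policy: str                  # rule for dead data: e.g. "archive", "anonymize", "delete"
        availability: Criticality
        integrity: Criticality
        confidentiality: Criticality
        loss_cost_per_hour: float           # financial KPI: impact of losing the last hour of data
        processing_locations: list[str] = field(default_factory=list)   # data processing geography
        external_requirements: list[str] = field(default_factory=list)  # PCI DSS, HIPAA, SOX, ...


    # Hypothetical example entry
    billing_db = DatasetRecord(
        name="billing-db",
        volume_tb=4.0,
        growth_tb_per_month=0.2,
        retention_months=84,
        expiry_policy="archive",
        availability=Criticality.HIGH,
        integrity=Criticality.HIGH,
        confidentiality=Criticality.HIGH,
        loss_cost_per_hour=50_000.0,
        processing_locations=["EU"],
        external_requirements=["PCI DSS", "SOX"],
    )

Keeping every dataset as a record like this makes the later sizing steps a matter of summing and filtering rather than guesswork.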

Information systems

Data is not only stored but also processed (transformed) by information systems. So the next step is to create a full inventory of all information systems: their architectural traits, interoperability and relationships, and hardware requirements expressed in abstract resources.
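
Continuing the same sketch, an information-system inventory entry might look like the following. The structure and every field name are assumptions made for illustration; the only point taken from the text is that hardware requirements are kept in abstract resources (vCPU, RAM, IOPS, bandwidth) rather than specific hardware models.

    from dataclasses import dataclass


    @dataclass
    class ResourceRequirements:
        """Hardware requirements in abstract resources, not concrete hardware."""
        vcpu: int
        ram_gb: int
        storage_gb: int
        storage_iops: int
        network_mbps: int


    @dataclass
    class InformationSystem:
        """One entry of the information-system inventory (illustrative structure)."""
        name: str
        architecture: str                 # e.g. "3-tier web", "client-server", "batch"
        datasets: list[str]               # names of the dataset records it processes
        depends_on: list[str]             # interoperability: other systems it talks to
        requirements: ResourceRequirements


    # Hypothetical example entry
    crm = InformationSystem(
        name="CRM",
        architecture="3-tier web",
        datasets=["billing-db"],
        depends_on=["mail-gateway"],
        requirements=ResourceRequirements(vcpu=16, ram_gb=64, storage_gb=500,
                                          storage_iops=2000, network_mbps=100),
    )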

What will virtualization look like in 10 years?

Let’s define “virtualization” to start. Virtualization is abstraction from the hardware itself, from the hardware microarchitecture. So when we talk about virtualization, it’s not just about server hypervisors and VMware virtual machines: software-defined storage, containers, even remote desktops and application streaming are all virtualization technologies.

As of today, mid-2016, server virtualization has almost stalled: no breakthroughs in the last several years. Hypervisors are being honed to perfection, the challengers follow the leader, and the differences shrink with each year. So vendors now fight over the ecosystem: management, orchestrators, monitoring tools and subsystem integration. We are now surprised when someone wants to buy a physical server and install Windows on it instead of a hypervisor. Virtual machines are no longer IT toys; they are an industry standard. Unfortunately, a sensible defense scheme (from backups to virtualization-aware firewalls) is not yet a standard feature.

Software Defined Everything, or we could say Virtualized Everything, has grown enormously. Most corporate-level storage systems are almost indistinguishable from standard x86 servers except for the form factor. Vendors no longer use special CPUs or ASICs, putting powerful multicore Xeons in the controllers instead. A storage system today is actually just a standard server with standard hardware, just with a lot of disks and some specialized software. All the storage logic, RAID, replication and journaling now lives in software. We blur the storage/server border even more with smart cache drivers and server-side flash caches. Where does the server end and the storage begin?

On the other side we see pure software storage systems that were never sold as hardware and carry no hardware storage heritage or architectural traits. Take any server, put in some disks and RAM as you please, install an application and voila! You lack space, performance or availability? Take another server, and another, and maybe a couple more. It gets even more interesting when we install the storage software in a virtual machine or even make it a hypervisor module. There are no longer separate servers and storage; this is a unified computing/storage system. We call it hyperconverged infrastructure. Virtual machines run inside it, virtual desktops and servers. More than that, users cannot tell whether they are in a dedicated VM, a terminal server session or just a streamed application. But who cares, when you can connect from a McDonald’s on the other side of the globe?

Today we talk about containers, but they are not a technological breakthrough. We have known about them for years, especially ISPs and hosting providers. What will happen in the near future is a merge of traditional full virtualization and containers into a single unified hypervisor. Docker and its rivals are not yet ready for production-level corporate workloads; there are still a lot of questions around security and QoS, but I bet it’s just a matter of a couple of years. Well, maybe more than a couple, but 10 is more than enough. Where was VMware 10 years ago, and where are we now in terms of server virtualization?

The network control plane is shifting more and more towards software, and access-layer switching blurs more and more. Where is your access layer when you have 100 VMs switching inside a hypervisor, never reaching a physical port? The only part really left to specialized hardware is high-speed core switches and ultra-low-latency networks like InfiniBand. But even that is just the data plane; the control plane lives in the Cloud.

Everything is moving towards the death of the general-purpose OS as we know it. We don’t really need an OS as such; we only need it to run applications. And applications are shifting more and more from installable packages to portable containers. We’ll see hypervisor 2.0 as the new general-purpose OS, and a further blurring of desktop, laptop, tablet and smartphone. We still install applications, but we already store our data in the cloud. In 10 years, a container with an application will move between desktop, smartphone and server infrastructure as easily as we move virtual machines today.

Some years ago we had to park the floppy drive heads when we were finished; today’s teenagers live with the cloud, and tomorrow’s teenagers will have to work hard to even grasp what it meant for an application or its data to be tied to particular hardware.