These questions arise throughout analytics, data mining, and simulation. The simplest way to run such applications is on a single server with a single memory space and one or more processors. When an application’s memory requirements exceed the available memory, however, the application may not run at all. The option to “Scale Up” to ever-larger servers is often too expensive, so the industry has evolved a “Scale Out” approach, using newer technologies that help users slice their problems into pieces small enough to run on affordable servers. Users scale out simply by adding more servers, but doing so often carries significant operational and development costs.
TidalScale offers the best of both worlds: it is as user-friendly as Scale Up and grows at linear cost like Scale Out.
TidalScale’s proprietary HyperKernel binds multiple commodity servers into a single virtual machine that can host operating systems such as Linux, FreeBSD, and Windows. It meets Big Data requirements with a virtual supercomputer that runs unmodified applications on entire unmodified data sets, all on commodity servers.
Aggregate compute resources for large-scale in-memory analysis and decision support
Scale like a cluster, using commodity hardware at linear cost
Allow customers to grow gradually as their needs develop
Dramatically simplify application development: no need to distribute work across servers
Run existing applications as a single instance, without modification, as if on a highly flexible mainframe
Optimize resources automatically, dynamically, and hierarchically
Apply to modern and emerging microprocessors, memories, interconnects, persistent storage, and networks