In this paper, we overview the motivation, design, and implementation behind AppScale, an open source distributed software system that implements a cloud platform-as-a-service (PaaS). Our goal with AppScale is to simplify cloud application (app) development and deployment and, by doing so, to broaden the population of developers who are able to innovate using cloud systems. We enable this by targeting the problem of app portability across clouds and app service/library implementations (functionality common across apps, such as data management, search, messaging, tasking, etc.). In particular, AppScale defines a simple, unifying, and open set of APIs based on the de facto public cloud standard of Google App Engine, i.e., apps that execute using Google’s public cloud also do so over AppScale without modification. AppScale then “plugs in” and automatically configures and deploys a number of alternative implementations of the different app services. Moreover, since we make AppScale available as a virtual machine image and a set of deployment tools, users can execute AppScale on-premise or over public cloud infrastructures. Developers can take advantage of the portability AppScale offers to simplify development and deployment of cloud apps, to compare and contrast different cloud services and fabrics without changing their apps or becoming experts in the constituent technologies, and to investigate and evaluate new cloud platform advances using a rich application and service ecosystem.
The paper will appear in the February/March issue of IEEE Internet Computing.
In this paper we present the design, implementation, and evaluation of a pluggable autoscaler within an open cloud platform-as-a-service (PaaS). We redefine high availability (HA) as the dynamic use of virtual machines to keep services available to users, making it a subset of elasticity (the dynamic use of virtual machines). This makes it possible to investigate autoscalers that simultaneously address HA and elasticity. We present and evaluate autoscalers within this pluggable system that are HA-aware and Quality-of-Service (QoS)-aware, and that operate automatically (that is, without user intervention) on web applications written in different programming languages. Hot spares can also be utilized to provide HA and improve QoS for web users. Within the open source AppScale PaaS, utilizing hot spares can increase the amount of web traffic that the QoS-aware autoscaler serves to users by up to 32%.
As this autoscaling system operates at the PaaS layer, it is able to control virtual machines and to be cost-aware when addressing HA and QoS. We therefore augment these autoscalers to make them cost-aware. This cost awareness uses Spot Instances within Amazon EC2 to reduce the cost of the machines acquired by 91%, in exchange for an increase in startup time. This pluggable autoscaling system facilitates the investigation by others of new autoscaling algorithms that can take advantage of metrics provided by different levels of the cloud stack (IaaS, PaaS, and SaaS).
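To make the pluggable design concrete, the following is a minimal sketch of what one autoscaler plug-in's decision step might look like. All names, thresholds, and prices here are hypothetical illustrations, not AppScale's actual implementation; the HA-first/QoS-second ordering and the hot-spare and Spot Instance trade-offs mirror the description above.

```python
# Hypothetical sketch of one pluggable autoscaler decision step.
# Names, thresholds, and prices are illustrative only.

ON_DEMAND_PRICE = 0.10   # $/hour, illustrative on-demand price
SPOT_PRICE = 0.009       # $/hour, illustrative Spot price (~91% cheaper)

def scale_decision(live_replicas, min_replicas, queued_requests,
                   qos_threshold, hot_spares):
    """Return the number of new VMs to request: HA first, then QoS."""
    # HA: keep at least min_replicas of the service running.
    if live_replicas < min_replicas:
        return min_replicas - live_replicas
    # QoS: if the request backlog exceeds the threshold, a hot spare
    # can absorb the load immediately; otherwise request a fresh VM,
    # which is slower because it must boot first.
    if queued_requests > qos_threshold:
        return 0 if hot_spares > 0 else 1
    return 0

def cheapest_instance():
    """Cost awareness: prefer Spot Instances, accepting slower startup."""
    return "spot" if SPOT_PRICE < ON_DEMAND_PRICE else "on-demand"
```

Because the autoscaler sits at the PaaS layer, a decision function like this could consume metrics from the IaaS (VM counts), the PaaS (replica health), and the SaaS (request queues) at once.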
Cloud computing is a service-oriented approach to distributed computing that provides users with resources at varying levels of abstraction. Cloud infrastructures provide users with access to self-service virtual machines that they can customize for their applications. Alternatively, cloud platforms offer a fully managed programming stack to which users can deploy their applications, and which scales without user intervention. Yet challenges remain to using cloud computing systems effectively. Cloud services are offered at varying levels of abstraction, meter usage according to vendor-specific pricing models, and expose access to their features via proprietary APIs. This raises the barrier-to-entry for each cloud service and encourages vendor lock-in.
The focus of our research is to design and implement research tools that mitigate the effects of these barriers-to-entry. We design and implement tools that serve users in the web services domain, the high performance computing domain, and the general-purpose application domain. These tools operate on a wide variety of cloud services and automatically execute applications provided by users, so that the user does not need to be aware of how each service operates and meters usage. Furthermore, these tools leverage programming language support to facilitate more expressive workflows for evolving use cases.
Our empirical results indicate that our contributions are able to effectively execute user-provided applications across cloud compute services from multiple, competing vendors. We demonstrate how we are able to provide users with tools that can be used to benchmark cloud compute, storage, and queue services, without needing to first learn the particulars of each cloud service. Additionally, we are able to optimize the execution of user-provided applications based on cost, performance, or via user-defined metrics.
Download the thesis.
We are happy to announce the release of URDME 1.2. New features include support for Comsol 4.1, 4.2 and 4.3. We have also added the ability to compile the solvers independently of Matlab libraries. This will greatly simplify deployment of URDME jobs by StochSS down the line.
To get URDME 1.2: www.urdme.org
Or visit us on GitHub.
We are happy to welcome Gautham Narayanasamy to the StochSS team. Gautham is a master’s student in computer science at UCSB. This quarter, he will work on implementing a UI for construction of well-mixed models.
We are currently in the process of developing the UI, a WSGI app compatible with GAE (we will use AppScale to deploy it). Here is a screenshot of a current proposal, based on the Bootstrap framework.
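For readers unfamiliar with what “a WSGI app compatible with GAE” entails, here is a minimal sketch of such an application (PEP 3333 style). The handler and response text are purely illustrative, not the actual StochSS UI code; any app of this shape can run on Google App Engine’s Python runtime and therefore on AppScale.

```python
# Minimal WSGI application (PEP 3333) of the kind that runs on
# Google App Engine's Python runtime, and hence on AppScale.
# The response body here is illustrative only.
def application(environ, start_response):
    # environ is a dict of CGI-style request variables.
    body = b"Hello from StochSS!"
    status = "200 OK"
    headers = [("Content-Type", "text/plain"),
               ("Content-Length", str(len(body)))]
    start_response(status, headers)
    # A WSGI app returns an iterable of byte strings.
    return [body]
```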
The RDME (reaction-diffusion master equation) breaks down in the limit of vanishing voxel sizes, in the sense that contributions from bimolecular reactions are lost. The problem sets in earlier (at larger voxel sizes) the more diffusion-limited the reaction is. This is a problem that has attracted a lot of interest since it was pointed out by Samuel Isaacson in this paper.
Recently, corrections to the bimolecular rates that are explicitly mesh-dependent have been proposed to deal with the problem. Erban and Chapman find an expression in 3D that works down to a critical mesh size.
In this paper, we use a theorem from Montroll to show that there will always be such a critical mesh size, below which no local correction to the RDME can make it agree with the Smoluchowski model, in the sense that the mean binding time between two particles should be the same in both models. In the limit of perfect diffusion control, we find analytical values for the critical size in both 2D and 3D. Interestingly, the value we find in 3D agrees with the value found by Erban and Chapman. We also discuss how the local corrections of Erban and Chapman, and our own, relate to those derived by Fange et al.
The first version of StochSS will support well-mixed stochastic simulation using solvers from StochKit2, but the work to incorporate spatial stochastic solvers is already starting. For the spatial solvers, we will use algorithms from URDME as a computational backend. URDME is a modular framework with a Matlab frontend, and uses Comsol Multiphysics to define and construct the spatial components of the model. Currently, its main use is as an interactive environment, functioning much like a Matlab toolbox. Models are specified using a Matlab/Comsol API.
To make the solvers of URDME readily available to StochSS, we are currently working on extending URDME to support open-source software for geometry modelling and meshing, and on a Python API for model specification. This Python interface will also be made available as an alternative frontend to URDME.
Welcome to the StochSS site! StochSS (Stochastic Simulation as a Service)
is our NIBIB-funded project to make discrete stochastic simulation easy and accessible. In the first version of StochSS we are focusing on simulation capabilities for cell biology. Our goal is to enable you to build a model (or models), scale it up to increasing complexity, incorporating spatial dependence, characterization of rare events, and parameter estimation, and explore the parameter space.
StochSS will be available on your desktop as a web app. As your computational needs grow, StochSS will be able to seamlessly deploy the appropriate computing resources as needed, via cloud computing. The foundation of the cloud computing capabilities is the AppScale open source cloud platform, developed by the Krintz group at UCSB. AppScale can link to a variety of commercial clouds, or you can turn your local cluster into a cloud.
Stay tuned for StochSS v1.0, coming soon!
The StochSS team
Linda Petzold, Chandra Krintz, Andreas Hellander, Per Lötstedt, Ben Bales, Bernie Daigle, Sheng Wu, Jin Fu, Chris Bunch, Brian Drawert, Hiranya Jayathilaka, Stefan Hellander