
ScaleIO: Deep Dive on Imperative Deployment

By now you have probably read the blog post, ScaleIO Framework v0.3: Deploy This!, where we announced the new version of the ScaleIO Framework. (If you haven’t, I would definitely go check it out first.) That release unveiled a new feature called Imperative Deployment, which is the first structured method for deploying ScaleIO into your Apache Mesos cluster. In this blog post, we are going to do a deep dive into that feature and highlight some of the interesting and cool things that Imperative Deployment brings to this release.

Let’s Kick this Off

The first thing to point out is that you need a strategy for how you want to deploy ScaleIO. ScaleIO is flexible and allows a nearly infinite number of possible configurations, and each of those configurations has pros and cons. So it turns out that the marketing material that makes ScaleIO look super easy to use glosses over the fact that there is a set of best practices you need to adhere to in order to get the most out of ScaleIO.

We are going to tackle various ScaleIO deployment scenarios in a series of blog installments, and our first topic covers environments for demos, dev/test, and smaller configurations. For this type of environment, a fully distributed or hyper-converged deployment is probably the best one to roll out since you are dealing with a relatively small number of systems. Demo and dev/test environments are the easy case: they “just need to work” and performance is an afterthought. So let’s take a look at a real-world hyper-converged configuration. It goes without saying that you want at least a 3-node Mesos master quorum to tolerate failure. For the ScaleIO MDM nodes (Primary, Secondary, and TieBreaker), we will reuse the 3 nodes serving as the Mesos masters. For compute, we will have 16 Mesos Agent nodes, each configured with a single 2TB drive. This configuration must be created prior to deploying the ScaleIO Framework.

Mesos Configuration

To deploy ScaleIO using the Framework’s Imperative Deployment feature, you define Mesos Agent attributes similar to those mentioned in the “Deploy This!” blog article. Before we begin, it is important to understand what the scaleio-sds and scaleio-sdc attributes really mean. The scaleio-sds attributes describe the protection domains and storage pools that will be created on ScaleIO and which disks/devices will be contributed to each domain/pool combination. The scaleio-sdc attributes describe the protection domains and storage pools from which that particular node will provision and consume ScaleIO volumes. Very simply put, sds is the server-side configuration of the disks/devices offered up to ScaleIO, and sdc is the client-side configuration for consuming volumes from ScaleIO.

The Imperative Configuration

So, in the configuration defined above (3 Mesos Masters + 3 ScaleIO MDMs and 16 Mesos Agent nodes), if the 2TB drive on each node is installed at /dev/xvdf (which can be verified using fdisk), your Mesos Agent nodes’ attributes would look like the block below. Note that any change to your Mesos Agent attributes requires a restart of the Mesos Agent service before deploying the Framework.

# cat /etc/mesos-slave/attributes/scaleio-sds-domains
mydomain

# cat /etc/mesos-slave/attributes/scaleio-sds-mydomain
mypool

# cat /etc/mesos-slave/attributes/scaleio-sds-mypool
/dev/xvdf

# cat /etc/mesos-slave/attributes/scaleio-sdc-domains
mydomain

# cat /etc/mesos-slave/attributes/scaleio-sdc-mydomain
mypool
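
Before restarting anything, it is worth sanity-checking the drive itself. Here is a minimal sketch, assuming a systemd-based distro where the Mesos Agent unit is named mesos-slave (both the tooling and the unit name may differ in your environment):

# Verify the 2TB drive is present at the expected location
fdisk -l /dev/xvdf

# Confirm the drive carries no filesystem (the FSTYPE column should be empty)
lsblk -f /dev/xvdf

# Restart the Mesos Agent so the new attributes take effect
systemctl restart mesos-slave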

Now a few things should be noted. First, it might be wise to use more meaningful names than mydomain or mypool; if this were for the Quality Engineering department, mydomain might be replaced with engineering and mypool with qe. Second, this assumes all devices appear at /dev/xvdf, but depending on your storage controller the drive might be at /dev/xvdg, for example, so replace it with the discovered or assigned value. Lastly, since REX-Ray currently only supports provisioning ScaleIO volumes from a single protection domain and storage pool, we could have omitted the /etc/mesos-slave/attributes/scaleio-sdc attributes altogether: there is code in place such that the last defined scaleio-sds domain and pool are automatically used for the scaleio-sdc components. When REX-Ray implements multi-domain/pool capabilities, this fallback will likely be deprecated.

Finally, suppose we know for certain that all the disks/devices are attached at /dev/xvdf, because the initial setup was performed with your favorite DevOps tool or because you are in AWS (/dev/xvdf happens to be the default when you add your first disk). In that case, you could instead have deployed using the ScaleIO Framework’s Single Global Pool method, which automatically attaches all unused disks (i.e., those without a filesystem) on the 16 Mesos Agent nodes. The default protection domain and storage pool names can be overridden with meaningful names using the configuration options -scaleio.protectiondomain=engineering and -scaleio.storagepool=qe. In this particular case, the end results of both methods would have been identical.
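
To make that concrete, here is a hedged sketch of what those overrides might look like. Only the two -scaleio.* options come from this post; the scheduler binary name and how you launch it (for example, via the cmd of a Marathon application definition) are placeholders for your actual Framework deployment:

# Hypothetical scheduler invocation; substitute your real launch command
./scaleio-scheduler \
    -scaleio.protectiondomain=engineering \
    -scaleio.storagepool=qe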

If this appears simpler than Imperative Deployment, why not just use the Single Global Pool method all the time? First, keep in mind that it can create only a single protection domain and a single storage pool. If you want more than one, you must use Imperative Deployment (example below). Second, if you have disks without a partition that you want to allocate to some other function, such as additional local storage, the Single Global Pool method will automatically consume and contribute those disks/devices to ScaleIO. Warning: this includes Agent nodes added to the cluster later for expansion! Defining these attributes for nodes being on-boarded yields an explicit configuration; without them, newly on-boarded nodes will contribute every disk/device presented to them, based on the -scaleio.protectiondomain and -scaleio.storagepool configuration options.

Here is an example with multiple storage pools. Suppose Mesos Agent nodes 1-8 are defined like this:

# cat /etc/mesos-slave/attributes/scaleio-sds-domains
engineering

# cat /etc/mesos-slave/attributes/scaleio-sds-engineering
qe

# cat /etc/mesos-slave/attributes/scaleio-sds-qe
/dev/xvdf

And Mesos Agent nodes 9-16 are defined like this:

# cat /etc/mesos-slave/attributes/scaleio-sds-domains
engineering

# cat /etc/mesos-slave/attributes/scaleio-sds-engineering
development

# cat /etc/mesos-slave/attributes/scaleio-sds-development
/dev/xvdf
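
Once the Framework has rolled this out, you can verify that the engineering protection domain really contains both pools. A quick sketch, assuming you have shell access to the Primary MDM and the scli utility that ships with ScaleIO (login syntax varies by version):

# Log in to the MDM and dump the cluster layout; the output should show
# the engineering domain with the qe and development storage pools,
# each backed by 8 SDS nodes
scli --login --username admin --password 'YourPassword'
scli --query_all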

What’s next?

A piece of functionality that is currently being worked on is Fault Sets. Fault Sets let you group nodes that are likely to fail together so that ScaleIO mirrors data outside the group, meaning a whole set of nodes can fail without data loss. This naturally allows for more advanced ScaleIO configurations and happens to be the target of the next blog article in this series.

Further down the road, there are plans to work on a Declarative Deployment option, which sits somewhere between the simplicity of the Single Global Pool method and the explicitness of Imperative Deployment. By providing more abstract constructs, it will let you deploy bigger configurations without getting into the weeds of managing which devices belong to which protection domain or storage pool.

Be sure to check out the ScaleIO Framework project on GitHub and visit the {code} labs page to test drive this feature. All feedback is welcome!

Looking Back at EMC World 2016

Wow! How quickly a week can go by. Like many of you, I was attending EMC World for the first time, and it also happened to be the first time I had been given the opportunity to present to a larger audience. I thought the experience exceeded my expectations, and based on some of the preliminary numbers and feedback we have been getting on the sessions the EMC {code} team presented, a good number of you agree that the session content and presentations were of value. Thanks again for attending the sessions and providing your feedback.

Couldn’t make it this year?

For those that couldn’t make it out this year, a number of people in {code} have started posting the materials and slide decks for our sessions. The official EMC World slide decks should be posted in the coming weeks, but there have been a large number of requests to get a hold of the material ASAP, and many of us on the team have been happy to oblige. As for my sessions, you can find the materials below.

Introduction To Mesos & Mesosphere

Here is the session material for Introduction To Mesos & Mesosphere (Monday, May 2 at 8:30am) which, just as the title says, is an Apache Mesos 101-type session.

You can download the “Introduction To Mesos & Mesosphere” PowerPoint presentation HERE. The video of the demonstration used at the end of the session, highlighting Mesos using persistent external storage, can be found on YouTube below:

The source code for the MVC web application written in Golang can be found in my GitHub repo. The two projects used in the demo were RestServer and RestClient.

To launch the MVC application with external persistent storage, you first need each of your Mesos Agent/Slave nodes running Mesos DNS and configured for persistent external storage using this Guide. Once you have those prerequisites in your Mesos cluster, you can find the Marathon JSON files to launch the tasks here. To start up the application, perform the following:

Start PostgreSQL:
curl -k -XPOST -d @postgres-mvc.json -H "Content-Type: application/json" YourMarathonIP:8080/v2/apps

Start RestServer:
curl -k -XPOST -d @restapi.json -H "Content-Type: application/json" YourMarathonIP:8080/v2/apps

Start RestClient:
curl -k -XPOST -d @ui.json -H "Content-Type: application/json" YourMarathonIP:8080/v2/apps
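
If you are curious what one of those JSON files contains, here is a hedged sketch of a postgres-mvc.json along the lines of the one in the repo. The image tag, volume name, and password are illustrative; the important part is the parameters block, which tells Docker to mount an external volume through the rexray volume driver:

cat > postgres-mvc.json <<'EOF'
{
  "id": "postgres-mvc",
  "cpus": 1,
  "mem": 1024,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "postgres:9.4",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 5432, "hostPort": 0, "protocol": "tcp" }
      ],
      "parameters": [
        { "key": "volume-driver", "value": "rexray" },
        { "key": "volume", "value": "postgresdata:/var/lib/postgresql/data" }
      ]
    }
  },
  "env": { "POSTGRES_PASSWORD": "mysecretpassword" }
}
EOF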

Deep Dive With Mesos & Persistent Storage For Applications

Here is the session material for Deep Dive With Mesos & Persistent Storage For Applications (Tuesday, May 3 at 3:00pm), which covered the importance of Apache Mesos Frameworks and the powerful capabilities that 2-layer scheduling provides in your datacenter and Mesos cluster.

You can download the “Deep Dive With Mesos & Persistent Storage For Applications” PowerPoint presentation HERE. The video of the demonstration used at the end of the session, highlighting the Elasticsearch Mesos Framework using persistent external storage, can be found on YouTube below:

To launch the Elasticsearch Framework with external persistent storage, you first need at least 3 Agent/Slave nodes in your Mesos cluster, and each of your Mesos Agent/Slave nodes must be configured for persistent external storage using this Guide. You can find the Marathon JSON file to launch the Elasticsearch scheduler here. To start up the scheduler, perform the following:

Start Elasticsearch Scheduler:
curl -k -XPOST -d @elasticsearch.json -H "Content-Type: application/json" YourMarathonIP:8080/v2/apps
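
To confirm everything came up, you can poll Marathon for the scheduler task and then check the health of the Elasticsearch cluster it builds. A hedged sketch, assuming the app id inside elasticsearch.json is elasticsearch and that an Elasticsearch node is reachable on an Agent at the default 9200 port:

# Ask Marathon whether the scheduler application is running
curl YourMarathonIP:8080/v2/apps/elasticsearch

# Check cluster health once the framework has launched its nodes
curl "YourAgentIP:9200/_cluster/health?pretty"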

If you want to run some of the advanced Elasticsearch functionality used in the demo, you can find additional information in this file.

What’s Next…

After recharging for a bit, we have already started on our post-EMC World plans and deliverables. Hopefully this will bring forth a bunch of interesting ideas and projects for the community. To keep up to date with the things I will be working on, please follow me on Twitter at @dvonthenen. If anyone has any questions about the EMC World presentations, you can always catch me on the {code} Community Slack channel.

Getting Ready for EMC World 2016

We are getting closer and closer to EMC World 2016. I have to admit, it’s approaching crazy fast. This will be my first time attending EMC World. It seems odd saying that, as I have attended many conferences in my career, but never the one my own company throws. This time it’s going to be a different conference-going experience, as I will be presenting two sessions in the “Code and Modern Operations” track this year. I am very excited for this opportunity to talk about things that are interesting to me and that I hope are of interest to others out there in the open source community.

Apache Mesos

The first session is Introduction To Mesos & Mesosphere. This is basically an Apache Mesos 101-type session with a focus on Mesosphere, the company that pushes the direction of Mesos. I will be co-presenting this session with Somik Behera from Mesosphere. For those that haven’t heard about Mesos or are looking to learn more about it, this is an excellent session outlining why Mesos is among the best workload schedulers in the datacenter and why it’s the preferred choice among companies looking to scale their applications. You can catch us both Monday morning (May 2) at 8:30am. Yes, you read that right. It is going to be difficult for people to drag themselves out of bed that early in the morning… this is Las Vegas, after all. Hope you all can make it!

Deep Dive

My second session is Deep Dive With Mesos & Persistent Storage For Applications. After you have had some time to digest the information from my Mesos 101 session, this one dives into some of the internals of Mesos as we explore Mesos Frameworks and 2-layer scheduling. We will discuss what 2-layer scheduling means and how external storage can enhance the story around applications leveraging Frameworks. For the architects, operators, and consumers of Mesos, this session is packed with things you need to know to make your applications function efficiently and remain highly available, avoiding the train wreck. Ultimately, the goal of the talk is to enable you to put things on autopilot so you don’t need to manage the application.

If you haven’t purchased tickets for EMC World, I highly recommend you do so as soon as possible. The EMC {code} team has a huge presence this year: we have our own booth along with our own session track, Code and Modern Operations, which I alluded to earlier. The {code} team collectively has 21 sessions at the conference this year, covering everything from Docker, to managing large open source communities, to Mesos, to contributing to open source, just to name a few. I will have a follow-up blog post just before EMC World highlighting some of the other sessions you might want to check out. Catch you all later!