Tag Archives: External Storage

Dell EMC World 2017 – Las Vegas, NV

It looks like it’s that time of year again as we are just days away from Dell EMC World 2017. The {code} team will once again be in attendance, presenting some interesting sessions (16 in total), a Hands-On Lab (I ran through it myself and it’s great!), and various materials at the show. The buffet (Yes, we are in Vegas after all!) of information we have lined up is pretty dang awesome! You can find more information about the stuff {code} has going on in our official {code} at Dell EMC World page.

Demos, Demos, Demos

What I wanted to talk about today are the two sessions that I will be presenting at Dell EMC World. The first session, called Demos Demos Demos! Containers & {code}, is happening on Wednesday, May 10 at 1:30 PM in room Zeno 4602. I will be co-presenting with Travis Rhoden and Vladimir Vivien. Just like the title says, this session will have a few slides to set up what is going on and talk about who we are… then it’s nothing but live demos. I think this will be a pretty amazing session that captures what is hot in the container and scheduler space but at the same time will give you some practical and real-world information to take home with you. Definitely check this out!

ScaleIO Framework

The second session I will be presenting solo. It’s called Managing ScaleIO As Software On Mesos and takes place on Thursday, May 11 at 11:30 AM in room Zeno 4602. I floated this idea last year during a session at (the then) EMC World 2016, where I thought it would be cool to be able to treat storage as just another piece of software. Well, one year later that idea is a reality, and we are going to talk about and demonstrate the ScaleIO Framework in this session. Many other container schedulers have implementations of this pattern, and this concept will change the way we consume software in the future.

Have fun, but not too much fun!

If you are heading down to Dell EMC World this year, stop by the sessions the {code} team will be presenting, and if you have any questions, feel free to linger around after the presentations to chat. I think this is going to be an awesome conference, so do check out some of the social networking opportunities available to connect with some new people, and as always enjoy the show and have fun (but not too much… it’s Vegas after all)!


Applications that Fix Themselves

I know that in my last blog post I said I would be talking about (and probably announcing) the FaultSet functionality planned for the next release of the ScaleIO Framework. As with all things in the world of technology and software, things don’t always go as planned. So today I am here to talk about some stuff relating to the Framework that will be in my speaking session entitled How Container Schedulers and Software Defined Storage will Change the Cloud at SCaLE 15x this Saturday, March 4th, at 3 PM in Ballroom F of the Pasadena Convention Center.

SCaLE 15x Logo

This new functionality at face value seems straightforward, but the implications start to open the door to some next-level stuff. Ok ok ok. I may have oversold that a little, but the idea itself is still pretty cool and I am super excited to talk about it here.

Just make it happen. I don’t care how!

Just this week, I released the ScaleIO Framework version 0.3.1, which has a functionality preview **cough** experimental **cough** for a couple of features that I think are cool. The first feature, although not as interesting, will probably be the most immediately useful to people that want to use ScaleIO but were turned off by the installation instructions… starting from a bare Mesos cluster, you can provision the entire ScaleIO storage platform in a highly available 3-node configuration from scratch and have all the storage integrations, like REX-Ray and mesos-module-dvdi, installed automatically.

Easy Street

In case you missed it… without having to know anything about ScaleIO, you can deploy an entirely software-based storage platform that gives your Mesos workloads the ability to seamlessly persist application data that is globally accessible and makes your apps highly available. This abstracts away the complexities of the entire storage platform and transforms it into a simple service you can consume storage from. As far as any user is concerned, the storage platform natively came with Mesos, and the first app you deploy can consume ScaleIO volumes from day one. If you want more details on how to make that happen, please check out the documentation.
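
To give you a feel for what “day one” looks like, below is a minimal sketch of a Marathon app that consumes a ScaleIO volume through REX-Ray and mesos-module-dvdi. The app id, volume name, and size are made up for illustration; the DVDI_* environment variables are the same ones mesos-module-dvdi already understands:

{
  "id": "scaleio-demo",
  "cmd": "while [ true ] ; do date >> /var/lib/rexray/volumes/demodata/heartbeat.log ; sleep 5 ; done",
  "mem": 32,
  "cpus": 0.1,
  "instances": 1,
  "env": {
    "DVDI_VOLUME_NAME": "demodata",
    "DVDI_VOLUME_DRIVER": "rexray",
    "DVDI_VOLUME_OPTS": "size=16,newfstype=xfs,overwritefs=false"
  }
}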

The Sky is Falling!! Do Something?!?!

I think the second functionality preview **cough** experimental **cough** in the 0.3.1 release has perhaps the most compelling story, but may be less useful in practice (at least for now). I have always been fascinated by the idea that applications, when they run into trouble, can go and fix themselves. We often call this self-remediation. In reality, that has always been a pipe dream, but there is some really cool infrastructure in the form of Mesos Frameworks that makes this idea a possibility.

It's not going to happen

So this second feature comes from my days as both a storage and backup user… where I would get the dreaded “storage array is full” notification. This typically entails getting another expander shelf for your storage array (if you are lucky enough to have expansion capability), populating disks in the expansion bay, and then configuring the array to accept this new raw capacity. In the age of Clouds and DevOps, anything is possible, and provisioning new resources is only an API call away.
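
In the cloud version of that story, a little watchdog is all it takes: notice the pool filling up, ask AWS for another EBS volume, and add it to the pool. Here is a rough sketch in Java of the kind of loop I am describing; the threshold, size, and helper methods are stand-ins for the real ScaleIO and EBS API calls, not the actual Framework code:

import java.util.concurrent.TimeUnit;

public class StoragePoolWatchdog {
    private static final double USAGE_THRESHOLD = 0.80; // start remediating at 80% full (made-up number)
    private static final int EBS_VOLUME_SIZE_GB = 128;  // made-up expansion increment

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            double used = queryPoolUtilization();                         // ask ScaleIO how full the pool is
            if (used >= USAGE_THRESHOLD) {
                String volumeId = provisionEbsVolume(EBS_VOLUME_SIZE_GB); // one API call away in AWS
                attachAndAddToPool(volumeId);                             // attach to a node and add it as a raw device
            }
            TimeUnit.MINUTES.sleep(1);                                    // poll on an interval
        }
    }

    // Stand-ins for the REST/API calls a real implementation would make.
    private static double queryPoolUtilization() { return 0.50; }
    private static String provisionEbsVolume(int sizeGb) { return "vol-123"; }
    private static void attachAndAddToPool(String volumeId) { /* no-op in this sketch */ }
}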

Anything is possible

The idea is that as our ScaleIO storage pool starts to approach full, we can provision more raw disks in the form of EBS volumes to contribute to the storage pool. Since we exist in the cloud, or in this case AWS, that is only an API call away. That is exactly the idea behind this feature… to live in a world where applications can self-remediate and fix themselves. Sounds cool yea?!?! If you are interested in more information about this feature, I urge you to check out the user guide, try it out, and provide input and feedback! And if you happen to be at SCaLE 15x this week, I will be doing this exact demo live! BONUS: You can watch the video demo that was performed at SCaLE here:

Where to go next…

So I hope the FaultSet functionality is just around the corner, along with support for CoreOS, or what they are now calling Container Linux, since a lot of the stuff coming out of Mesos and DC/OS is now based on that platform. Let us know if you want more content surrounding Mesos and the ScaleIO Framework by hitting me up in our {code} community Slack channel #mesos. Additionally, if you are in the Los Angeles area this week, I would highly recommend stopping by SCaLE 15x in Pasadena, catching some of the sessions, and swinging by the {code} booth in the expo hall to continue the conversation.


Oct Recap: It’s Been a Trip

I have recently been doing a lot of travel for work. I know that for some, this amount of travel is routine, but for me, it has been pretty awesome. In the month of October, I visited 2 places I have never been to before. But before I start with the picture show and get into the cool things that I saw, I think I need to start out with how all this happened.

The adventure first started with the acceptance of a speaking proposal at ContainerCon EU 2016 in Berlin. Yes, you read that right! I was pretty dang stoked to be selected to speak at the event and also to help lead the Apache Mesos Hands-on Lab at the Open Source Storage Summit. The experience was unbelievable and was also a big validation of some of the things that I have been working on; namely a ScaleIO Framework for Apache Mesos, which just so happened to be announced at the conference. I was definitely a little nervous about the date/time of my speaking session, last day and last time slot, but the turnout was still fantastic! For those that are interested, my talk focused on Software Defined Storage and Container Schedulers. You can view my slide deck on GitHub here.

CC Berlin

Now on to the fun stuff! Before heading to the conference, a few of my coworkers and I stopped in Munich. I got to check out a bunch of stuff ranging from museums talking about World War 2 to sampling local cuisine (as in eating a lot of freaking sausages). Nothing was cooler though than going to the original Oktoberfest in a legit beer tent. The place was freaking amazing and although I am not a beer fan, I partook in drinking and had a blast.

Interesting side note, Oktoberfest isn’t technically an event… it’s actually a place. Mind blown! When we picked up our tickets, we asked the concierge “How can we get to Oktoberfest?” They looked at us like we were complete mouth-breathers and smiled. They explained and pointed out where Oktoberfest actually was on the map, explained the significance of the area where Oktoberfest takes place, and told us we could just get into any cab and say “Take us to Oktoberfest!”. We found that super funny… maybe a little too funny, but it was epic!

Oktoberfest in Munich

We then all took a train from Munich to Berlin, did the conference thing, and then hopped back home. We kicked back for less than a week and flew to Hawaii for work. Yes, that isn’t a joke. And also not a joke, I had never been to Hawaii even though I live in the Los Angeles area, which is just a small jump over the ocean. Our team, {code} by Dell EMC, was actually sponsoring the World Drone Racing Championships in Hawaii. Super cool! In all seriousness though, drones are emerging into a multi-multi-billion (yes, with a B) dollar industry with commercial applications like performing land surveys, and the amount of data being collected by these drones is growing exponentially, which is where Dell EMC comes in. Putting the business stuff aside for a second, after having tried it first hand using my Inductrix beginner drone with a cam and the Marvel Vision FPV Goggles, flying drones is pretty cool!


What’s up next? I will be continuing the work that I have been doing on the ScaleIO Framework and will undoubtedly have additional surprises in store hopefully fairly soon! (With a blog post to accompany it!) If you happened to miss the launch of the ScaleIO Framework, you can find more information including a demo on the official {code} by Dell EMC blog.


Getting Ready for EMC World 2016

We are getting closer and closer to EMC World 2016. I have to admit, it’s approaching crazy fast. This will be my first time attending EMC World. Seems odd saying that as I have attended many conferences in my career, but never the one my company throws. This time it’s going to be a different conference-going experience as I will be presenting two sessions in the “Code and Modern Operations” track this year. I am very excited for this opportunity to talk about things that are interesting to me and that I hope are of interest to others out there in the open source community.

Apache Mesos

The first session is Introduction To Mesos & Mesosphere. This is basically an Apache Mesos 101 type session with a focus on the company, Mesosphere, which drives the direction of Mesos. I will be co-presenting this session with Somik Behera from Mesosphere. For those that haven’t heard about Mesos or are looking to learn more about it, this is an excellent session outlining why Mesos is among the best workload schedulers in the datacenter and why it’s the preferred choice among companies looking to scale their applications. You can catch us both Monday morning (May 2) at 8:30am. Yes, you read that right. It’s going to be difficult for people to drag themselves out of bed that early in the morning… this is Las Vegas after all. Hope you all can make it!

Deep Dive

My second session is Deep Dive With Mesos & Persistent Storage For Applications. After getting some time to digest the information from my Mesos 101 session, this one will dive into some of the internals of Mesos as we explore Mesos Frameworks and 2 layer scheduling. We will discuss what 2 layer scheduling means and how external storage can enhance the story around applications leveraging Frameworks. For the architects, operators, and consumers of Mesos, this session is packed with things you need to know in order to make your applications function efficiently and stay highly available in order to avoid the train wreck. Ultimately, the goal of the talk is to enable you to put things on autopilot so you don’t need to manage the application.

Train Wreck

If you haven’t purchased tickets for EMC World, I would highly recommend you do so as soon as possible. The EMC {code} team has a huge presence this year as we have our own booth along with our own session track, Code and Modern Operations, which I alluded to earlier. The {code} team collectively has 21 sessions at the conference this year, talking about everything from Docker and Mesos to managing large open source communities and contributing to open source, just to name a few. I will have a follow up blog post just before EMC World highlighting some of the other sessions you might want to check out. Catch you all later!


mesos-module-dvdi Installation Walkthrough

As other people on the EMC {code} team started getting involved with Apache Mesos, I have been helping out by fielding questions and troubleshooting installations of both Mesos and mesos-module-dvdi so that others can kick the tires and become subject matter experts. It was brought to my attention that a good, clean walkthrough of adding external volume/storage support to an existing production Mesos cluster might be of value to others, which brings us to this blog post. We will cover a couple of examples of launching Marathon tasks to make use of our newfound capability. Also, we will take a look at the functionality that mesos-module-dvdi brings to the table and talk about what those features mean to you.

Preflight Checklist

Before we begin, I want to make sure that everyone who is going to follow along has a properly configured Mesos cluster. For those that don’t have a Mesos cluster, there is a very good walkthrough guide by DigitalOcean that I would highly recommend. I would encourage performing every step in that guide and not skipping or taking shortcuts when deploying your cluster, as doing so can lead to wonky behavior of your cluster.

Wonky Mesos Clusters

There are a number of open source projects out there that attempt to deploy a proper Mesos cluster, but each one that I have evaluated (there have been a lot) has always fallen short somewhere.

– everpeace/vagrant-mesos currently only works for version 0.23.1 and older. There haven’t been updates to the project in some time.
– mesosphere/playa-mesos runs on VirtualBox, but I found a number of issues with how the nodes are configured and ran into issues using certain frameworks, such as the Elastic Search framework, because of VirtualBox’s networking layer. The configuration that is deployed is only a single master node, so no high availability. This project also hasn’t been updated in some time.
– The minimesos project deploys a configuration that is backed by Docker containers, so if you need to modify something like the slave nodes (in our case, we need to) this isn’t going to be a good option. Like the previous method, single master and no high availability.

If you want to kick the tires on all the functionality that Mesos has to offer, I would still recommend that you deploy these nodes by hand. If that is too time consuming or you don’t have access to AWS or hardware resources, playa-mesos gets you most of the way there. The {code} team has a fork of that project which has some bug fixes as well as some enhancements. You can find our fork on GitHub located at emccode/vagrant/playa-mesos. If you choose to go this route, the {code} fork of playa-mesos has mesos-module-dvdi already pre-configured using VirtualBox as the external storage provider, in which case you can use this walkthrough to verify your settings. Just a reminder, the purpose of this walkthrough is adding external persistent volumes to production Mesos environments, in which case you will most likely take the steps below and incorporate them into a DevOps tool that your organization uses.

mesos-module-dvdi Installation Walkthrough

mesos-module-dvdi builds on top of functionality that is provided by REX-Ray and DVDCLI. So the first thing we need to do is install both packages on all your Mesos slave/agent nodes (NOTE: all steps outlined in this section need to be done on all your slave/agent nodes as root). You can do that by running the following commands:

curl -sSL https://dl.bintray.com/emccode/rexray/install | sh -
curl -sSL https://dl.bintray.com/emccode/dvdcli/install | sh -

Then you need to configure REX-Ray with the storage provider you plan on using. Since I usually host my configurations on Amazon for demonstration purposes, I am going to configure REX-Ray to use EC2 and place the file at /etc/rexray/config.yml. This is what my configuration file looks like:

rexray:
  storageDrivers:
  - ec2
  volume:
    mount:
      preempt: true
    unmount:
      ignoreUsedCount: true
aws:
  accessKey: <YOUR_ACCESS_KEY>
  secretKey: <YOUR_SECRET_KEY>

You can do a simple test to make sure REX-Ray and DVDCLI are functioning properly by running the following command: rexray volume. If you need help on configuring REX-Ray for other storage platforms, you can view the REX-Ray configuration guide.

Next, we need to install mesos-module-dvdi, which gives us the ability to provision, mount, and unmount external volumes for Mesos tasks. You can download the version that matches your Mesos configuration from our GitHub releases page. My Mesos cluster is running version 0.27.0, so I am going to download libmesos_dvdi_isolator-0.27.0.so and place the .so in the /usr/lib directory. We also need to create a Mesos isolator configuration file so that the Mesos slave/agent knows how to load the isolator module. Create the file /usr/lib/dvdi-mod.json with the following content inside (replace with your version of the downloaded isolator):

   "libraries": [
       "file": "/usr/lib/libmesos_dvdi_isolator-0.27.0.so",
       "modules": [
           "name": "com_emccode_mesos_DockerVolumeDriverIsolator",
           "parameters": [
              "key": "isolator_command",
              "value": "/emc/dvdi_isolator"

Finally, we just need to add the hooks for the Mesos slave/agent to pick up the configuration file and load the isolator. You can do that by running the following shell commands:

echo "docker,mesos" | tee /etc/mesos-slave/containerizers
echo "com_emccode_mesos_DockerVolumeDriverIsolator" | tee /etc/mesos-slave/isolation
echo "file:///usr/lib/dvdi-mod.json" | tee /etc/mesos-slave/modules
service mesos-slave restart

Success! This configuration will also allow you to use external volumes for Docker workloads, which is why you see docker in the first command. To learn more about using external storage with the Docker containerizer in Apache Mesos, please check out this great article on the EMC {code} blog.
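
As a quick illustration of what that enables, once REX-Ray is installed on the agents, a Docker workload can request an external volume straight from the Docker CLI. Something along these lines (the volume name here is just an example) should work on any of the slave/agent nodes:

docker run -d --volume-driver=rexray -v dockerdata:/data busybox sh -c "while true; do date >> /data/timestamp.txt; sleep 5; done"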

Very Nice Great Success

Exercising the mesos-module-dvdi

Now that we have all this configured correctly, let’s start out by creating a couple of external volumes by launching a Mesos task. Create a file called basic.json with the following content inside:

  "id": "hello-mesos",
  "cmd": "while [ true ] ; do touch /var/lib/rexray/volumes/firstvolume/hello1 ; sleep 5 ; done",
  "mem": 16,
  "cpus": 0.1,
  "instances": 1,
  "env": {
    "DVDI_VOLUME_NAME": "firstvolume",
    "DVDI_VOLUME_OPTS": "size=5,iops=150,volumetype=io1,newfstype=xfs,overwritefs=false",
    "DVDI_VOLUME_DRIVER": "rexray",
    "DVDI_VOLUME_NAME1": "secondvolume",
    "DVDI_VOLUME_DRIVER1": "rexray",
    "DVDI_VOLUME_OPTS1": "size=6,volumetype=gp2,newfstype=ext4,overwritefs=false"

Then we can launch the Marathon task by running the following shell command:

curl -k -XPOST -d @basic.json -H "Content-Type: application/json" http://<MARATHON_HOST>:8080/v2/apps

This simply creates two new volumes called “firstvolume” and “secondvolume” and uses the default REX-Ray mount location /var/lib/rexray/volumes to mount these volumes.
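
If you want to double-check that everything worked, the same sanity check from earlier applies. Running something like the following on the slave/agent node that picked up the task should list both volumes and show the files the task is writing (the paths assume the default REX-Ray mount location):

rexray volume
ls /var/lib/rexray/volumes/firstvolume/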

For a more complex example, let’s stand up a simple web server with an external volume and scribble some data on it. Create a file called web.json with the following content inside. Replace YOUR_NAME_HERE with your first name (be careful and try not to remove the single quote trailing it) and point the uris entry at the statichttpserver binary (the one used here is downloaded from my GitHub account):

  "id": "webserver",
  "uris": [
  "cmd": "echo 'Hello, YOUR_NAME_HERE' $(cat /var/lib/rexray/volumes/webdata/readme.txt | wc -l) >> /var/lib/rexray/volumes/webdata/readme.txt && chmod u+x statichttpserver &&  ./statichttpserver -port=$PORT -path=/var/lib/rexray/volumes/webdata",
  "mem": 16,
  "cpus": 0.1,
  "instances": 1,
  "constraints": [
    ["hostname", "UNIQUE"]
  "env": {
    "DVDI_VOLUME_NAME": "webdata",
    "DVDI_VOLUME_OPTS": "size=5,iops=150,volumetype=io1,newfstype=xfs,overwritefs=false",
    "DVDI_VOLUME_DRIVER": "rexray"

Then we can launch the Marathon task by running the following shell command:

curl -k -XPOST -d @web.json -H "Content-Type: application/json" http://<MARATHON_HOST>:8080/v2/apps

You can find out what slave/agent node and port the web server is running on by clicking on the running task in the Marathon UI and looking at the application configuration information on the page (as seen below).

Marathon Task

And if you open up that hostname and port in your web browser (you may have noticed that my FQDN name isn’t the same since I configured my EC2 instances without elastic IPs), you will have accessed the webserver we downloaded from my GitHub account and you should see a file readme.txt at the root. When you click that readme.txt to display the contents, you should see your name inside the text file.

Opening up the readme.txt

Although the second example is a fairly meaningful real-world example of using external volumes/storage, the real benefits don’t become apparent until you start doing things like failing the Mesos slave/agent node that is currently running the task. We can simulate this failure by simply stopping the Marathon task and relaunching it (or by using the Marathon REST API, as shown below). The new instance of our task may start up on a different node, but the important takeaway is that the external volume from the first instance will be reattached to the new task, thus preserving any prior data. If you open the readme.txt after the task enters the running state, you should see the previous “Hello” line with a “0” at the end, now followed by a “Hello” line ending in “1” that this new instance has added on. Now just imagine a database like PostgreSQL or an Elastic Search node as a Marathon task. Any failure in the Mesos slave/agent or health check will trigger the task to start up from its previous state on a new node. Say “Hello” to high availability!
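
If you would rather not click around the Marathon UI to do this, the Marathon REST API can perform the stop/relaunch for you. Something like this (again, substitute your own Marathon host) will restart the webserver app and quite possibly land it on a different node:

curl -k -XPOST http://<MARATHON_HOST>:8080/v2/apps/webserver/restart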


What’s Next…

Very cool! Hopefully this served as a good guide to adding external volume/storage support to your existing Apache Mesos cluster using REX-Ray, DVDCLI, and mesos-module-dvdi. I have a couple of ideas on some follow up blog posts… thinking about how I do dev/test for mesos-module-dvdi with Docker containers or perhaps more about frameworks. If you have any topics you might want me to discuss, please drop me a line on twitter at @dvonthenen.


Enabling External Storage on Mesos Frameworks

There has been a huge push to take containers to the next level by twisting them to do much more. So much more, in fact, that many are starting to use them in ways that were never originally intended, even going against the founding principles of containers. The most noteworthy principle being left on the cutting room floor is, without a doubt, being “stateless”. It is pretty evident that this trend is only accelerating… just doing a simple search of popular traditional databases on Docker Hub yields results like MySQL, MariaDB, Postgres, and OracleLinux (Oracle suggests you might try running an Oracle instance in a Docker container. LAF!). Then there are all the NoSQL implementations like Elastic Search, Cassandra, MongoDB, and CouchBase, just to name a few. We are going to take a look at how we can bring these workloads into the next evolution of stateful containers using the Mesos Elastic Search Framework as a proposed model.

Afterthought

The problem with stateful containers today is that pretty much every container implementation, whether it’s Docker, Apache Mesos, or anything else, has been architected with those original principles, such as being stateless, in mind. When the container goes away, anything associated with the container is gone. This obviously makes it difficult to maintain configuration or keep long-term data around. Why make this tradeoff to begin with? It keeps the design simple. The problem is that useful applications all have state somewhere. As a response, container implementations enabled the ability to store data on the local disks of compute nodes, thus tying workloads to a single node. However, on the failure of a particular node, you could potentially lose all data on that direct attached storage. Not good for high availability, but at least there was some answer to this need.

Enter the Elastic Search Mesos Framework

This brings me to a recently submitted Pull Request to the Elastic Search (ES) Mesos Framework project on GitHub to add support for External Storage orchestration, but more importantly to enable management of those external storage resources among Mesos slave/agent nodes. Before I jump into talking about the ES Framework, I probably should quickly talk about Mesos Frameworks in general. A Mesos Framework is a way to specialize a particular workload or application. This specialization can come in the form of tuning the application to best utilize hardware, like a GPU for heavy graphics processing, on a given slave node, or even distributing tasks that are scaled out to place them in different racks within a datacenter for high availability. A Framework consists of 2 components: a Scheduler and an Executor. When a resource offer is passed along to a Scheduler, the Scheduler can evaluate the offers, apply its unique view or knowledge of its particular workload, and deploy specialized tasks or applications in the form of Executors to slave/agent nodes (seen below).

Mesos Architecture - Picture Thanks to DigitalOcean

The ES Framework behaves in the same way described above. The design of the ES Scheduler and Executor is such that both components are implemented as Docker containers. The ES Scheduler is deployed to Marathon via Docker, and by default the Scheduler will create 3 Elastic Search nodes based on a special list of criteria that must be met. If an offer meets that criteria, an Elastic Search Executor in the form of a Docker container will be created on the Mesos slave/agent node that made the offer. That Executor holds the task, which in this case is an Elastic Search node.

Deep Dive: How the External Storage Works

Let’s do a deep dive on the Pull Request and discuss why I made some of the decisions that I did. I first broke apart OfferStrategy.java into a base class containing everything common to a “normal” Elastic Search strategy and one that will make use of external storage. OfferStrategyNormal.java retains the original functionality and behavior of the ES Scheduler and is on by default. I then created OfferStrategyExternalStorage.java, which removes all checks for storage requirements. Since the storage used in this mode is all managed externally, the Scheduler does not need to take storage requirements into account when it looks at the criteria for deployment.
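
To make the shape of that split a little more concrete, it looks roughly like the sketch below. These are simplified, illustrative classes, not the actual code from the Pull Request:

// A pared-down stand-in for a Mesos resource offer.
class Offer {
    double cpus;
    double memMb;
    double diskMb;
}

abstract class OfferStrategyBase {
    static final double REQUIRED_DISK_MB = 5 * 1024; // made-up local disk requirement

    boolean isOfferAcceptable(Offer offer) {
        return hasEnoughCpuAndMemory(offer) && hasEnoughStorage(offer);
    }

    boolean hasEnoughCpuAndMemory(Offer offer) {
        // checks common to both strategies
        return offer.cpus >= 1.0 && offer.memMb >= 512;
    }

    abstract boolean hasEnoughStorage(Offer offer);
}

class OfferStrategyNormal extends OfferStrategyBase {
    @Override
    boolean hasEnoughStorage(Offer offer) {
        // original behavior: the agent itself must offer enough local disk
        return offer.diskMb >= REQUIRED_DISK_MB;
    }
}

class OfferStrategyExternalStorage extends OfferStrategyBase {
    @Override
    boolean hasEnoughStorage(Offer offer) {
        // volumes are provisioned and managed externally, so no local disk check is needed
        return true;
    }
}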

The critical piece in assigning External Volumes to Elastic Search nodes is being able to uniquely associate the set of Volumes containing the configuration and data of each Elastic Search node, represented by /tmp/config and /data. That means we need to create, at minimum, runtime-unique IDs. What do I mean by runtime unique? It means that if there exists an ES node with an identifier of ID2, there exists no other node with ID2. If an ID is freed, let’s say ID2 is released by a Mesos slave/agent node failure, we make every attempt to reuse that ID2. This identifier is defined as a task environment variable as seen in ExecutorEnvironmentalVariables.java.

addToList(ELASTICSEARCH_NODE_ID, Long.toString(lNodeId));
LOGGER.debug("Elastic Node ID: " + lNodeId);

private void addToList(String key, String value) {
  envList.add(getEnvProto(key, value));
}
Why an environment variable? Because when the task, and therefore the Executor, is lost, the reference to the ES Node ID is freed, so when a new ES node is created to replace the failed one, the ES Node ID can be recycled. How do we determine which Node ID to use when selecting a new one or recycling an old one? We do this using the following function in ClusterState.java:

public long getElasticNodeId() {
    List<TaskInfo> taskList = getTaskList();

    //create a bitmask of all the node ids currently being used
    long bitmask = 0;
    for (TaskInfo info : taskList) {
        LOGGER.debug("getElasticNodeId - Task:");
        for (Variable var : info.getExecutor().getCommand().getEnvironment().getVariablesList()) {
            if (var.getName().equalsIgnoreCase(ExecutorEnvironmentalVariables.ELASTICSEARCH_NODE_ID)) {
                bitmask |= 1 << Integer.parseInt(var.getValue()) - 1;
            }
        }
    }
    LOGGER.debug("Bitmask: " + bitmask);

    //then find out which node ids are not being used
    long lNodeId = 0;
    for (int i = 0; i < 31; i++) {
        if ((bitmask & (1 << i)) == 0) {
            lNodeId = i + 1;
            LOGGER.debug("Found Free: " + lNodeId);
            break;
        }
    }

    return lNodeId;
}

We get the current running task list, find out which tasks have the environment variable set, build a bitmask, and then walk the bitmask starting from the least significant bit until we find a free ID. Fairly simple. As someone who doesn’t run Elastic Search in production, it was pointed out to me that this would only support 32 nodes, so there is a future commit coming to make this generic for an unlimited number of nodes.

Let’s Do This Thing

To run this, you need to have a Mesos configuration running 0.25.0 (the version supported by the ES Framework) with at least 3 slave/agent nodes, you need to have your Docker Volume Driver installed, like REX-Ray for example, and you need to pre-create the volumes you plan on using, named using the parameter --frameworkName (default: elasticsearch) appended with the node id and config/data (example: elasticsearch1config and elasticsearch1data). Then start the ES Scheduler with the command line parameter --externalVolumeDriver=rexray or whatever volume driver you happen to be using. You are all set! Pretty easy huh? Interested in seeing more? You can find a demo on YouTube located below.
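
To make the pre-create step concrete, here is roughly what it could look like when REX-Ray is the Docker volume driver and you stick with the default framework name and 3 nodes (the older --name flag syntax from that era of Docker is shown; adjust for your environment):

docker volume create -d rexray --name=elasticsearch1config
docker volume create -d rexray --name=elasticsearch1data
docker volume create -d rexray --name=elasticsearch2config
docker volume create -d rexray --name=elasticsearch2data
docker volume create -d rexray --name=elasticsearch3config
docker volume create -d rexray --name=elasticsearch3data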

BONUS! The Elastic Search Framework has a facility (although only recommended for very advanced users) for using the Elastic Search JAR directly on the Mesos slave/agent node, and in that case, code was also added in this Pull Request to use mesos-module-dvdi, which is a Mesos Isolator, to create and mount your external volumes. You just need to install mesos-module-dvdi and DVDCLI.

The good news is that the Pull Request has been accepted and is currently slated for the 8.0 release of the Elastic Search Framework. The bad news is that the next release looks like it will be version 7.2, so you are going to have to wait a little longer before you get an official release with this External Volume support. HOWEVER, if you are interested in test driving the functionality, I have the Elastic Search Docker images used for the YouTube video up on my Docker Hub page. If you want to kick the tires firsthand, you can visit https://hub.docker.com/r/dvonthenen/ for images and instructions on how to get up and running. Both the Scheduler image and the Executor image were auto-created as a result of the gradle build done for the demo.


What’s up next? This was a good exercise in adding on to an existing Mesos Scheduler and Executor, and the {code} team may have a Framework of our own on the way. Stay tuned!

Drops the Mic