
Applications that Fix Themselves

I know that in my last blog post I said I would be talking about (and probably announcing) the FaultSet functionality planned for the next release of the ScaleIO Framework. As with all things in the world of technology and software, things don’t always go as planned. So today I am here to talk about some stuff relating to the Framework that will be in my speaking session entitled How Container Schedulers and Software Defined Storage will Change the Cloud at SCaLE 15x this Saturday, March 4th at 3pm in Ballroom F of the Pasadena Convention Center.

SCaLE 15x Logo

This new functionality at face value seems straightforward, but the implications start to open the doors to next-level-thought kinda stuff. Ok ok ok. I may have oversold that a little, but the idea itself is still pretty cool and I am super excited to talk about it here.

Just make it happen. I don’t care how!

Just this week, I released the ScaleIO Framework version 0.3.1, which has a functionality preview **cough** experimental **cough** for a couple of features that I think are cool. The first feature, although not as interesting, will probably be the most immediately useful to people who want to use ScaleIO but were turned off by the installation instructions… starting from a bare Mesos cluster, you can provision the entire ScaleIO storage platform in a highly available 3-node configuration from scratch and have all the storage integrations, like REX-Ray and mesos-module-dvdi, installed automatically.

Easy Street

In case you missed it… without having to know anything about ScaleIO, you can deploy an entirely software-based storage platform that gives your Mesos workloads the ability to seamlessly persist application data that is globally accessible, and that makes your apps highly available. This abstracts away the complexities of the entire storage platform and transforms it into a simple service you can consume storage from. As far as any user is concerned, the storage platform came with Mesos natively, and the first app you deploy can consume ScaleIO volumes from day one. If you want more details on how to make that happen, please check out the documentation.
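To make that a little more concrete, here is a rough sketch of what consuming a ScaleIO volume from a Marathon app could look like, posted to the Marathon REST API with Python. The app id, volume name, and Marathon URL are made up, and the external-volume fields are only loosely modeled on the REX-Ray and mesos-module-dvdi documentation, so treat this as an illustration rather than a copy-paste recipe.

    import requests

    # Hypothetical Marathon endpoint -- substitute your own cluster
    MARATHON = "http://marathon.example.com:8080"

    # A Postgres app whose data directory lives on a ScaleIO volume,
    # surfaced through mesos-module-dvdi and the REX-Ray driver.
    app = {
        "id": "/postgres",
        "cpus": 1,
        "mem": 1024,
        "instances": 1,
        "container": {
            "type": "DOCKER",
            "docker": {"image": "postgres:9.6", "network": "HOST"},
            "volumes": [{
                "containerPath": "/var/lib/postgresql/data",
                "mode": "RW",
                "external": {
                    "name": "postgres-data",            # ScaleIO volume name
                    "provider": "dvdi",
                    "options": {"dvdi/driver": "rexray"},
                },
            }],
        },
        # external volumes should not be shared by overlapping deployments
        "upgradeStrategy": {"minimumHealthCapacity": 0, "maximumOverCapacity": 0},
    }

    resp = requests.post(MARATHON + "/v2/apps", json=app)
    resp.raise_for_status()
    print("deployed", resp.json()["id"])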

The Sky is Falling!! Do Something?!?!

I think the second functionality preview **cough** experimental **cough** in the 0.3.1 release has perhaps the most compelling story but may be less useful in practice (at least for now). I have always been fascinated by this idea that applications, when they run into trouble, can go and fix themselves. We often call this self-remediation. In reality, that has always been a pipe dream, but there is some really cool infrastructure in the form of Mesos Frameworks that makes this idea a possibility.

It's not going to happen

So this second feature comes from my days as both a storage and backup user… where I would get the dreaded "storage array is full" notification. That typically entails getting another expander shelf for your storage array (if you are lucky enough to have expansion capability), populating disks in the expansion bay, and then configuring the array to accept the new raw capacity. In the age of Clouds and DevOps, anything is possible, and provisioning new resources is only an API call away.

Anything is possible

The idea is that as our ScaleIO storage pool starts to approach full, we can provision more raw disks in the form of EBS volumes to contribute to the storage pool. Since we live in the cloud, or in this case AWS, that is only an API call away. That is exactly the idea behind this feature… to live in a world where applications can self-remediate and fix themselves. Sounds cool, yea?!?! If you are interested in more information about this feature, I urge you to check out the user guide, try it out, and provide input and feedback! And if you happen to be at SCaLE 15x this week, I will be doing this exact demo live! BONUS: You can watch the video of that demo as it was performed at SCaLE here.
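To give a rough feel for the mechanics behind that demo, here is a minimal sketch of the remediation loop in Python. The real Framework is written in Go and drives this through the Mesos scheduler, and the get_pool_utilization() helper, instance id, and thresholds below are all hypothetical placeholders; only the EC2 calls are real boto3 APIs.

    import time
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    THRESHOLD = 0.80                          # expand once the pool is 80% full (illustrative)
    NEW_VOLUME_SIZE_GB = 100                  # size of each new raw disk (illustrative)
    SDS_INSTANCE_ID = "i-0123456789abcdef0"   # hypothetical ScaleIO SDS node
    AVAILABILITY_ZONE = "us-east-1a"

    def get_pool_utilization():
        """Hypothetical helper that asks the ScaleIO gateway how full the pool is (0.0-1.0)."""
        raise NotImplementedError

    def expand_pool():
        # 1. Provision a new raw disk in the form of an EBS volume.
        vol = ec2.create_volume(AvailabilityZone=AVAILABILITY_ZONE,
                                Size=NEW_VOLUME_SIZE_GB,
                                VolumeType="gp2")
        vol_id = vol["VolumeId"]
        ec2.get_waiter("volume_available").wait(VolumeIds=[vol_id])

        # 2. Attach it to an SDS node; ScaleIO can then add the device to the storage pool.
        ec2.attach_volume(VolumeId=vol_id, InstanceId=SDS_INSTANCE_ID, Device="/dev/xvdf")
        print("attached", vol_id, "to", SDS_INSTANCE_ID)

    while True:
        if get_pool_utilization() > THRESHOLD:
            expand_pool()
        time.sleep(60)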

Where to go next…

So I hope the FaultSet functionality is just around the corner, along with support for CoreOS, or what they are now calling Container Linux, since a lot of the stuff coming out of Mesos and DC/OS is now based on that platform. Let us know if you want more content surrounding Mesos and the ScaleIO Framework by hitting me up in our {code} community Slack channel #mesos. Additionally, if you are in the Los Angeles area this week, I would highly recommend stopping by SCaLE 15x in Pasadena, catching some of the sessions, and stopping by the {code} booth in the expo hall to continue the conversation.

Looking Back at SCaLE x14

For those that don’t know what SCaLE is… SCaLE stands for the Southern California Linux Expo, and this marked the 14th year the conference has been held, this time in Pasadena, CA on January 21-24. This was the first time I had attended SCaLE, and I have to say it was quite refreshing going to a conference where the primary purpose wasn’t trying to sell you product but rather ideas and open source projects of interest. A lot of this is due to a very community-driven focus, which encompassed everything from session selection to a large volunteer staff.

There were a number of sessions and tracks covering Ubucon (everything Ubuntu), PostgreSQL, MySQL, Apache Bigtop, Open Source in Education, Unikernels, Robotics using Golang (pictured below), and more. I took it upon myself to take a slice out of each pie to get a good feel for what the conference had to offer. Like everything in life, there was a range of the good, the bad, and the ugly… but I would have to say there wasn’t much, if any, in the latter category. For reference, you can get a copy of the conference schedule here as a sampling of the types of sessions SCaLE provides.

Gobot - Robots Powered by Golang

The other significant difference is the audience: the makeup of the attendees at this conference is significantly different from traditional conferences backed by big money, and in this instance it was predominantly developers, DevOps peeps, and sysadmins. Gone are the sales and marketing people with the heavy focus on landing deals, and the private rooms off to the side where contracts are being drawn up. If you are looking to network with other developers and the actual users of a particular technology, this is a fantastic place to connect with those people.

Without further ado, here are some notes from sessions I found interesting. It should also be noted that I have included a lot of the session links in this blog as many of the slide decks are available on those pages.

IoT and Snappy Ubuntu Core

This session was titled Internet of Things gets ‘snappy’ with Ubuntu Core, and its main focus was introducing people to the idea of Snappy Ubuntu Core, which Canonical is pushing as the solution for building applications in the cloud and on embedded devices. Think of it as a minimal Ubuntu operating system built using the same libraries as traditional Ubuntu, just with a much smaller footprint, wrapped in a convenient package manager that can orchestrate installs and upgrades. The IoT demo was an application built using snappy on a Raspberry Pi with an IP camera that can do facial tracking. Pretty cool! You can find more information about Snappy Ubuntu Core here.

IoT using Snappy Ubuntu Core on a Raspberry Pi

Juju Charms

This was my first exposure to Juju Charms. I had heard the name thrown around before but didn’t have the time to take a look at the offering. The session Writing your first Juju Charm was a well-presented introduction for first-time users. The main takeaway is that Juju models applications and their relationships encapsulated in what is called a charm, and that those charms, for example MySQL, are created, maintained, and tuned by subject-matter experts. Pulling in those vetted charms helps provide a solid foundation to build your applications on. Dependencies in charms are automatically pulled in, and the presenter compared this to the layers of an onion: you just layer in the applications you want. Then, when pulling together your solution, it’s simply a matter of dragging and connecting these charms within the UI.

When writing your own charm, applications are hooked together by an event-driven engine. For example, the application contained within your own charm doesn’t get installed until the "http server started" and "MariaDB started" events are received. Overall a good session. You can find more information about Juju here and here.
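For a rough feel of that event-driven model, here is a sketch of what handlers in Juju’s reactive charm framework look like. The charm name, state names, and relations below are hypothetical; the layers and interfaces you would actually react to are covered in the Juju documentation linked above.

    # reactive/myapp.py -- a hypothetical charm layer
    from charms.reactive import when, when_not, set_state
    from charmhelpers.core import hookenv

    @when_not('myapp.installed')
    def install_myapp():
        # runs before any relations are wired up
        hookenv.status_set('maintenance', 'installing myapp')
        # ... fetch and unpack the application here ...
        set_state('myapp.installed')

    # this handler only fires once both the web server and the database
    # relations have reported that they are ready
    @when('myapp.installed', 'website.available', 'database.available')
    def configure_myapp(website, database):
        hookenv.status_set('maintenance', 'connecting myapp to its relations')
        # ... write config using the connection details exposed by `database` ...
        hookenv.status_set('active', 'myapp is ready')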

Docker and Unikernels

Although what I am going to write about here wasn’t a session at SCaLE per se, speakers from both the Docker and unikernel presentations at this special SCaLE x14 edition of the Docker LA Meetup also had sessions at the conference. The first speaker was Jérôme Petazzoni from Docker (pictured on the right). We discussed Swarm and the advances made in the latest release, specifically around multi-network orchestration between container instances to satisfy the cluster use case. I did enjoy the fact that the discussion was almost completely driven by demonstration, but given the issues that were known even walking into this demo, it makes me think that this isn’t ready for prime time yet. I definitely appreciate the candor and honesty from the presenter about these issues; it speaks to his professionalism.

The second half of the meetup quickly switched gears to talk about the Docker acquisition of Unikernel Systems, the primary driver of the open source ecosystem at unikernel.org. Presenting a 101-type course on unikernels were Amir Chaudhry and Richard Mortier (pictured on the left). The session was quite fascinating considering I had not heard the term unikernel until Docker made the announcement that morning. In a nutshell, the idea of a unikernel is that you take only the parts of a particular stack, say networking, that you need in order for your application to run. This implies that the stack is modular and has clear separation of concerns. The claim is that this type of operating system has:

  • a smaller footprint in terms of size
  • better security because they have a lower penetration profile
  • better performance because unnecessary services aren’t running
  • quicker boot times than a traditional operating system, which lends itself to cloud applications

While all this might be true, I have some reservations about dynamically pulling pieces of the operating system together to build your final application image. Unikernel Systems, via their tools, claims to have solved that dependency nightmare. For the sake of argument, if we accept that they have been able to solve dynamic dependencies between modules, a lot of these operating systems have been rewritten from the ground up, which raises questions about stability. Additionally, these unikernels don’t have a traditional kernel or protection layer, meaning that the application has full access all the way down the stack. Think about the security ramifications of this design.

It seems like unikernels are at odds with what Ubuntu Core decided to do, which was to create a common minimal operating system using the exact same libraries we have been using for decades to run cloud and embedded applications. I think I prefer the Ubuntu Core design, but I have a feeling the hype and the backing of Docker’s marketing machine might smash that idea.

Jérôme Petazzoni of Docker, Amir Chaudhry and Richard Mortier of Unikernel Systems

Mentoring in Open Source

I attended the Dent the Universe: Mentoring in Open Source session after a fellow coworker who was interested in it was unable to attend SCaLE. I was pleasantly surprised with both the content of the discussion as well as the makeup of the audience. The audience had a mix of genders, races, backgrounds, and also ages… as in there were teens, toddlers, and babies in the audience (pretty sure the babies didn’t really follow what was going on).

The upshot of the session is that relationships between mentors and mentees bring forth a sense of community like what we see in the open source world. It’s a platform for exchanging ideas, similar to a meetup but in an individual, one-on-one setting. There are many open source projects and organizations out there that have mentoring programs, including but not limited to Google, Apache, Fedora, and Ubuntu. The session covered what mentors are and aren’t, how you build that relationship, the dedication that mentors need to have, and even how mentees might go about getting a mentor. Of all the sessions I sat in on, I have to say I took the most notes in this one… maybe it was because the subject matter was so different, but nonetheless this was definitely an excellent session and well worth the time.

Nomad, a Golang Cluster Scheduler

I just want to give a quick mention to the Nomad – An introduction to Cluster Schedulers session. I found the overall design of Nomad intriguing based on the claimed feature set. Nomad is a HashiCorp project; if you don’t know who HashiCorp is, they are the ones who brought you projects like Vagrant and Consul. Nomad is written in Golang and supports scheduling for virtual infrastructure, Docker, Java applications, and more. It supports global and local cluster regions (think multi-datacenter support), has facilities for deploying, scaling, rolling updates, and load balancing, integrates with Consul, and more. That said, it seems like scale isn’t their strong suit yet, as the presenter believes stress testing has only been done on 100 or so nodes; however, Nomad might be a good enough solution for a mid-sized business. I will be keeping an eye on this considering the success that HashiCorp has had with their other projects.

Expo Floor at SCaLE x14

Conclusion

Overall, SCaLE was a great conference with a veritable buffet of topics to interest the wide variety of people who attend. Hell, for the price, under $40 if you get in on a coupon code, you can’t beat the value for the level of information in the sessions and the discussions had with the many attendees and speakers. I would definitely recommend checking out this conference next year; whether you can convince your employer to fund the trip, or you have the spare time and (not much) cash to come on your own dime, it’s definitely a worthwhile experience.