
Pushing the Boundaries: My Experience at the Real Time Communication Conference at Illinois Institute of Technology

This was my first time at the RTC Conference at Illinois Institute of Technology in Chicago, and I was blown away by the caliber of talks at the event. They ranged from very academic and theoretical to extremely practical, capturing what is happening on the ground today in real-time communications. The mix of attendees was one of the most diverse I have ever seen, ranging from seasoned professionals in their respective fields to bright and inquisitive minds from IIT just starting their careers. I was fortunate to have some fantastic conversations while I was there, which I will get to at the end of this post, but overall this was a mind-opening experience that I am grateful to have had.

RTC Conference at Illinois Institute of Technology

My journey to Chicago included presenting two sessions at the conference. The first was titled “Enhancing Real-Time WebRTC Conversation Understanding Using ChatGPT” and the second was “Edge Devices as Interactive Personal Assistants: Unleashing the Power of Generative AI Agents”. The two talks had very different goals. Based on the feedback and the number of questions I got afterward in the hallway, they were very well received, and attendees got a glimpse into some unique and thought-provoking possibilities they could take home to explore.

Enhancing Real-Time WebRTC Conversation Understanding Using ChatGPT

The focus of this session was using Generative AI and Large Language Models (LLMs) to influence conversations in real time. The backdrop was WebRTC as the protocol and platform hosting the conversation, but in reality, any medium conducive to carrying a conversation would suffice. WebRTC does provide an ideal environment, though, since it is an open standard and every modern browser is capable of supporting these voice/video communications.

Large Language Models and conversational AIs like ChatGPT have, for the first time, enabled us to influence the conversation on these platforms because they can participate seamlessly and provide relevant contributions to the discussion at hand. That’s the key to why this is a “thing” today: these AIs can be proactive in a conversation and not just react to it.

If you are interested in learning more and seeing a really cool demo showcasing this in action, take a look at the recording above. The demo was definitely a crowd-pleaser since it highlighted two powerful concepts: the AI can retain the history of the conversation taking place, and it can meaningfully participate in and influence the simulated conversation used for the demo.
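If you want a rough sense of how those two concepts fit together in code, here is a minimal sketch (not the demo’s actual implementation). It assumes some speech-to-text layer is already turning each utterance from the WebRTC call into text, and it uses the OpenAI Python library’s chat completions API. The running history list is what gives the model its “memory” of the call, and the reply is what you would feed back into the conversation via text-to-speech or a chat overlay.

```python
# Minimal sketch: feed live transcripts from a WebRTC call into ChatGPT while
# keeping the running conversation history, so the model can contribute in
# context. Assumes a speech-to-text layer already produces each utterance.
import openai  # pip install openai

openai.api_key = "sk-..."  # your API key

# The running history is what lets the model "remember" the conversation.
history = [
    {"role": "system",
     "content": "You are a helpful participant in a live voice conversation. "
                "Keep replies short and conversational."}
]

def on_transcript(speaker: str, text: str) -> str:
    """Called whenever the transcription layer emits a finished utterance."""
    history.append({"role": "user", "content": f"{speaker}: {text}"})

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=history,
    )
    reply = response.choices[0].message["content"]

    # Keep the model's own contribution in the history so future replies
    # stay consistent with what it already said.
    history.append({"role": "assistant", "content": reply})
    return reply  # hand this off to text-to-speech or the chat overlay

# Example: simulate two utterances arriving from the call
print(on_transcript("Alice", "Can we move the launch to next Thursday?"))
print(on_transcript("Bob", "What did we just agree on for the launch date?"))
```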

All of these resources, links to articles mentioned, and open source projects used in this presentation can be found in the slides. There are also instructions on how to reproduce the demo within this talk.

Edge Devices as Interactive Personal Assistants: Unleashing the Power of Generative AI Agents

My second session focused on Autonomous AI Agents. This might be an unfamiliar topic to some, but we have all heard about them interacting in the real world. Unlike Siri and Alexa, which focus on a single transactional question and response, these are processes where AI models create their own sub-tasks for problems that need more refinement or detail than a single answer can provide. Without these autonomous agents, we typically achieve this refinement by asking the AI ourselves to drill down into a problem further. With them, the agent process generates its own follow-up questions to seek out the details of the answer.
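To make that distinction concrete, here is a heavily simplified sketch of that sub-task loop (not the code from the talk): the model is asked to break a broad goal into its own sub-questions, works each one, and then synthesizes a final answer from the intermediate findings. The model name and prompts are just placeholders.

```python
# Rough sketch of the "agent creates its own sub-tasks" loop described above.
# Assumes openai.api_key is already set.
import openai

def ask(prompt: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message["content"]

def autonomous_answer(goal: str) -> str:
    # 1. Let the model plan its own sub-tasks instead of us drilling down manually.
    plan = ask("Break the following task into 3-5 short sub-questions, "
               "one per line, no numbering:\n" + goal)
    sub_tasks = [line.strip() for line in plan.splitlines() if line.strip()]

    # 2. Work each sub-task independently, collecting the findings.
    findings = []
    for task in sub_tasks:
        findings.append(f"Q: {task}\nA: {ask(task)}")

    # 3. Synthesize a final, more thorough answer from the intermediate results.
    return ask("Using these findings:\n\n" + "\n\n".join(findings) +
               f"\n\nWrite a thorough answer to: {goal}")

print(autonomous_answer("How should a small team plan a WebRTC service rollout?"))
```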

Since these LLMs are getting smarter while consuming fewer resources than previous generations, and since IoT (Internet of Things) and Edge devices are getting denser with more hardware capability, these models can now live on those devices for the first time. This talk looks at different architectures that can be used to land Autonomous Agents on IoT/Edge devices and point them at jobs that could run for many minutes to multiple days. The trade-off in these cases is a more thorough answer built on a concentrated amount of knowledge versus the speed of the reply.

Take a look at the recording of the session above. There is also a demo at the end of the presentation that serves as a proof of concept for what could be done with these Autonomous AI Agents, using an open source project I wrote called Open Virtual Assistant. The demo highlights exercising the “memory” of these agents (via their vector database) and how one might launch them on an IoT or Edge device.
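As a toy illustration of the “memory” idea (and not the Open Virtual Assistant implementation itself), the sketch below stores past facts as embedding vectors and pulls back the closest matches by cosine similarity when the agent needs context. A real deployment would use an actual vector database; this keeps everything in-process just to show the mechanics, and it assumes the OpenAI embeddings API for the vectors.

```python
# Toy agent "memory": embed each fact, recall the closest matches to a query.
# Assumes openai.api_key is already set.
import numpy as np
import openai

def embed(text: str) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

class Memory:
    def __init__(self):
        self.texts, self.vectors = [], []

    def remember(self, text: str):
        self.texts.append(text)
        self.vectors.append(embed(text))

    def recall(self, query: str, k: int = 2):
        q = embed(query)
        # cosine similarity against every stored memory
        scores = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
                  for v in self.vectors]
        best = np.argsort(scores)[::-1][:k]
        return [self.texts[i] for i in best]

memory = Memory()
memory.remember("The thermostat in the living room is set to 21C.")
memory.remember("The front door camera saw a delivery at 2pm.")
print(memory.recall("what temperature is the house?"))
```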

Again, all of these resources, links to articles mentioned, and open source projects used in this presentation can be found in the slides, along with instructions on reproducing the demo from this talk.

Looking Back… Personal Reflections

My most memorable parts of the conference were the conversations with other attendees. The last time I attended a conference was in the second half of 2022, just before ChatGPT blew up in the media. Oh, how times have changed! Besides a good number of the talks focusing on ChatGPT or AI in general, the buzz was definitely in the air and dominated most of the chatter in the hallways. I love tech talk, but I might love these philosophical, what-if, and future-predicting conversations even more.

One of the questions I get asked most frequently as someone who works in AI/ML is, “Will humans lose our jobs to AI?”. I was asked this at the conference as well. My response is always yes and no. Yes, certain jobs will likely become obsolete… and no, there will always be jobs out there, since we haven’t solved all the problems of our existence. There will always be jobs, but they might look completely different from what they are today.

An example I always like to give is the calculator and the personal computer. When the calculator became mainstream, did accounting or anything else related to math disappear? Definitely not. Society then created far more advanced “calculators” in the form of personal computers that handle all sorts of repetitive tasks, which naturally transitioned into automation: the automation of building cars, canning foods, or mass-producing clothing and shoes. That automation made manual canning jobs obsolete, but it created new jobs to develop and maintain these automation systems.

Everyone recognizes the potential for Artificial Intelligence to be the next significant disruptor to humanity. My advice is that, just like with the calculator or the computer, it’s best to understand and know how to use these systems regardless of your field. Those who know how to leverage AI will outshine those who don’t. Finally, when there is no longer a need for a door-to-door encyclopedia salesperson, it’s best to objectively recognize the change in tides and learn something new to transition to. That’s my long-winded answer.

Meeting New Friends

I had a great time at the RTC Conference at IIT, and if you are a person like me who loves to learn about new and exciting topics, technologies, and ideas, then this is a great place to expand your mind. I highly recommend going, and I hope I am lucky enough to be working on something compelling enough to share with others next year. Cheers!

Dell EMC World 2017 – Las Vegas, NV

It looks like it’s that time of year again, as we are just days away from Dell EMC World 2017. The {code} team will once again be in attendance, presenting some interesting sessions (16 in total), a Hands-On Lab (I ran through it myself and it’s great!), and various materials at the show. The buffet (yes, we are in Vegas after all!) of information we have lined up is pretty dang awesome! You can find more information about everything {code} has going on in our official {code} at Dell EMC World page.

Demos, Demos, Demos

What I wanted to talk about today were the two sessions that I will be presenting at Dell EMC World. The first session, called Demos Demos Demos! Containers & {code}, is happening on Wednesday, May 10 at 1:30 PM in room Zeno 4602. I will be co-presenting with Travis Rhoden and Vladimir Vivien. Just like the title says, this session will have a few slides to set up what is going on and talk about who we are… then it’s nothing but live demos. I think this will be a pretty amazing session that captures what is hot in the container and scheduler space while at the same time giving you some practical, real-world information to take home with you. Definitely check this out!

ScaleIO Framework

The second session I will be presenting solo. It’s called Managing ScaleIO As Software On Mesos and is happening on Thursday, May 11 at 11:30 AM in room Zeno 4602. I floated this idea last year during a session at (the then) EMC World 2016, where I thought it would be cool to treat storage as just another piece of software. Well, one year later that idea is a reality, and we are going to talk about and demonstrate the ScaleIO Framework in this session. Many other container schedulers have implementations of this pattern, and this concept will change the way we consume software in the future.

Have fun, but not too much fun!

If you are heading down to Dell EMC World this year, stop by the sessions the {code} team will be presenting, and if you have any questions, feel free to linger around after the presentations to chat. I think this is going to be an awesome conference. Do check out some of the social networking opportunities available to connect with new people, and as always, enjoy the show and have fun (but not too much… it’s Vegas after all)!

Applications that Fix Themselves

I know that in my last blog post I said I would be talking about (and probably announcing) the FaultSet functionality planned for the next release of the ScaleIO Framework. As with all things in the world of technology and software, things don’t always go as planned. So today I am here to talk about some stuff related to the Framework that will be in my speaking session entitled How Container Schedulers and Software Defined Storage will Change the Cloud at SCaLE 15x this Saturday, March 4th at 3pm in Ballroom F of the Pasadena Convention Center.


At face value this new functionality seems straightforward, but the implications start to open the door to some next-level-thought kind of stuff. Ok ok ok. I may have oversold that a little, but the idea itself is still pretty cool and I am super excited to talk about it here.

Just make it happen. I don’t care how!

Just this week, I released the ScaleIO Framework version 0.3.1, which has a functionality preview **cough** experimental **cough** for a couple of features that I think are cool. The first feature, although not as interesting, will probably be the most immediately useful to people who want to use ScaleIO but were turned off by the installation instructions… starting from a bare Mesos cluster, you can provision the entire ScaleIO storage platform in a highly available 3-node configuration from scratch and have all the storage integrations, like REX-Ray and mesos-module-dvdi, installed automatically.


In case you missed it… without having to know anything about ScaleIO, you can deploy an entirely software-based storage platform that gives your Mesos workloads the ability to seamlessly persist application data that is globally accessible, making your apps highly available. This abstracts away the complexities of the storage platform and transforms it into a simple service you can consume storage from. As far as any user is concerned, the storage platform came natively with Mesos, and the first app you deploy can consume ScaleIO volumes from day one. If you want more details on how to make that happen, please check out the documentation.
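To give a flavor of what “consuming ScaleIO volumes from day one” can look like from the application’s side, here is a sketch of submitting a Marathon app that requests an external volume through mesos-module-dvdi and REX-Ray. The field names follow Marathon’s external-volume support, but treat the volume name, image, and driver options as illustrative and defer to the framework’s documentation for the authoritative form.

```python
# Submit a Marathon app whose data lives on an external ScaleIO volume
# (surfaced through mesos-module-dvdi + REX-Ray). Volume name, image, and
# endpoint are illustrative.
import json
import requests  # pip install requests

app = {
    "id": "/postgres",
    "cpus": 1, "mem": 1024, "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "postgres:9.6", "network": "HOST"},
        "volumes": [{
            "containerPath": "/var/lib/postgresql/data",
            "mode": "RW",
            "external": {
                "name": "postgres-data",            # the ScaleIO volume name
                "provider": "dvdi",
                "options": {"dvdi/driver": "rexray"}
            }
        }]
    },
    "env": {"POSTGRES_PASSWORD": "secret"}
}

# Submit the app to Marathon; if the node fails, Marathon reschedules the task
# and the same ScaleIO volume follows it to the new host.
resp = requests.post("http://marathon.mesos:8080/v2/apps",
                     data=json.dumps(app),
                     headers={"Content-Type": "application/json"})
print(resp.status_code, resp.text)
```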

The Sky is Falling!! Do Something?!?!

I think the second functionality preview **cough** experimental **cough** in the 0.3.1 release has perhaps the most compelling story, but it may be less useful in practice (at least for now). I have always been fascinated by the idea that applications, when they run into trouble, can go and fix themselves. We often call this self-remediation. In reality, that has always been a pipe dream, but there is some really cool infrastructure in the form of Mesos Frameworks that makes this idea a possibility.


So this second feature comes from my days as both a storage and backup user… where I would get the dreaded “storage array is full” notification. Dealing with that typically entails getting another expander shelf for your storage array (if you are lucky enough to have expansion capability), populating the expansion bay with disks, and then configuring the array to accept the new raw capacity. In the age of Clouds and DevOps, anything is possible, and provisioning a new resource is only an API call away.


The idea is that as our ScaleIO storage pool starts to approach full, we can provision more raw disks in the form of EBS volumes to contribute to the storage pool. Since we live in the cloud, or in this case AWS, that is only an API call away. That is exactly the idea behind this feature… to live in a world where applications can self-remediate and fix themselves. Sounds cool, yea?!?! If you are interested in more information about this feature, I urge you to check out the user guide, try it out, and provide input and feedback! And if you happen to be at SCaLE 15x this week, I will be doing this exact demo live! BONUS: You can watch the video of the demo that was performed at SCaLE here:
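For a concrete sense of the shape of that remediation loop (the framework handles this internally; this is just a standalone sketch), the snippet below watches pool utilization and, when it crosses a threshold, grows the pool by creating and attaching a fresh EBS volume through the AWS API. The pool_used_fraction() helper is a stand-in for a query against the ScaleIO REST API, and the instance ID, availability zone, and threshold are made-up values.

```python
# Standalone sketch of storage self-remediation: when the pool nears full,
# provision and attach a new EBS volume to the ScaleIO node.
import time
import boto3  # pip install boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

THRESHOLD = 0.80                      # remediate when the pool is 80% full
INSTANCE_ID = "i-0123456789abcdef0"   # the ScaleIO SDS node to expand
AZ = "us-east-1a"

def pool_used_fraction() -> float:
    """Placeholder: in reality this would query the ScaleIO REST API."""
    raise NotImplementedError

def expand_pool(size_gb: int = 100):
    vol = ec2.create_volume(AvailabilityZone=AZ, Size=size_gb, VolumeType="gp2")
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
    ec2.attach_volume(VolumeId=vol["VolumeId"], InstanceId=INSTANCE_ID,
                      Device="/dev/xvdf")
    # ...then tell ScaleIO to add the new raw device to the storage pool.

while True:
    if pool_used_fraction() >= THRESHOLD:
        expand_pool()
    time.sleep(300)  # check every five minutes
```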

Where to go next…

So I hope the FaultSet functionality is just around the corner, along with support for CoreOS, or what they are now calling Container Linux, since a lot of the stuff coming out of Mesos and DC/OS is now based on that platform. Let us know if you want more content around Mesos and the ScaleIO Framework by hitting me up in our {code} community Slack channel #mesos. Additionally, if you are in the Los Angeles area this week, I would highly recommend stopping by SCaLE 15x in Pasadena, catching some of the sessions, and stopping by the {code} booth in the expo hall to continue the conversation.