
Pushing the Boundaries: My Experience at the Real Time Communication Conference at Illinois Institute of Technology

This was my first time at the RTC Conference at Illinois Institute of Technology in Chicago, and I was blown away by the caliber of talks at the event. They ranged from very academic and theoretical to extremely practical, capturing what is happening on the ground today in real-time communications. The mix of attendees was one of the most diverse I have ever seen, ranging from seasoned professionals in their respective fields to bright and inquisitive minds from IIT just starting their careers. I was fortunate to have some fantastic conversations while I was there, which I will get to at the end of this post, but this was a mind-opening experience that I am grateful to have had.

RTC Conference at Illinois Institute of Technology

My journey to Chicago included presenting two sessions at the conference. The first was titled “Enhancing Real-Time WebRTC Conversation Understanding Using ChatGPT” and the second was “Edge Devices as Interactive Personal Assistants: Unleashing the Power of Generative AI Agents”. The two talks had very different goals. Based on the feedback and the number of questions I got afterward in the hallway, they were very well received, and attendees got a glimpse into some unique and thought-provoking possibilities they could take home to explore.

Enhancing Real-Time WebRTC Conversation Understanding Using ChatGPT

The gist of this session was using Generative AI and Large Language Models (LLMs) to influence conversations in real time. The backdrop was using WebRTC as the protocol and platform to host the conversation, but in reality, any medium conducive to carrying a conversation would suffice. WebRTC does provide an ideal environment, though, as it’s an open standard and every modern browser is capable of supporting this kind of voice/video communication.

Large Language Models and conversational AIs like ChatGPT have, for the first time, enabled us to influence the conversations on these platforms because they can participate seamlessly and contribute something relevant to the discussion at hand. That’s the key to why this is a “thing” today: these AIs can be proactive in a conversation, not just react to it.
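
To make that idea a little more concrete, here is a minimal sketch (not the code from the talk) of handing the running transcript of a WebRTC call to an LLM so it can chime in. It assumes the OpenAI Python client (openai >= 1.0), an OPENAI_API_KEY in the environment, and that a speech-to-text step has already produced the transcript; the contribute() helper and the sample dialogue are purely illustrative.

# Minimal sketch: feed the running transcript of a call to an LLM so it can
# contribute to the conversation in real time (illustrative only).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Conversation history accumulated from a (hypothetical) speech-to-text step.
history = [
    {"role": "system", "content": "You are a helpful participant in a live meeting."},
    {"role": "user", "content": "Alice: Should we ship the release on Friday?"},
    {"role": "user", "content": "Bob: I'm worried about the open bug count."},
]

def contribute(history):
    # Ask the model for a relevant, proactive contribution given everything said so far.
    response = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
    return response.choices[0].message.content

print(contribute(history))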

If you are interested in learning more and seeing a really cool demo showcasing this in action, take a look at the recording above. The demo was definitely a crowd-pleaser since we got to highlight two powerful concepts: the AI retains the history of the conversation taking place, and it participates meaningfully enough to influence the simulated conversation in the demo.

All of the resources, links to the articles mentioned, and open source projects used in this presentation can be found in the slides, along with instructions on how to reproduce the demo from the talk.

Edge Devices as Interactive Personal Assistants: Unleashing the Power of Generative AI Agents

My second session focused on Autonomous AI Agents. This might be an unfamiliar topic to some, but we have all heard about them interacting in the real world. Unlike Siri and Alexa, which focus on a single transactional question and response, these are processes where AI models create their own sub-tasks for problems that need more refinement or detail than a single answer can provide. Without these Autonomous Agents, we typically achieve this refinement by asking the AI ourselves to drill further into a problem. With them, the agent generates its own follow-up questions to seek out the details of the answer.
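
As a rough illustration of what “creating its own sub-tasks” can look like, and assuming the same OpenAI Python client as above (this is not any particular agent framework, just a sketch), an agent loop might ask the model to break a goal into follow-up questions, answer each one, and then synthesize a final response:

# Minimal sketch of an autonomous-agent loop: the model generates its own
# sub-tasks, answers them, and then synthesizes a final, more thorough reply.
from openai import OpenAI

client = OpenAI()

def ask(prompt):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def autonomous_answer(goal):
    # 1. Let the model create its own sub-tasks for the goal.
    subtasks = ask(f"List three short research questions needed to answer: {goal}")
    # 2. Answer each sub-task independently.
    findings = [ask(q) for q in subtasks.splitlines() if q.strip()]
    # 3. Synthesize the findings into one thorough answer.
    return ask(f"Using these notes, answer '{goal}':\n\n" + "\n\n".join(findings))

print(autonomous_answer("Plan a low-power deployment of an LLM on an edge device"))

A real agent would bound, persist, and schedule this loop, but the core idea is the same.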

Because these LLMs are getting smarter while consuming fewer resources than previous generations, and because IoT (Internet of Things) and Edge devices are shipping with denser hardware and more capable resources, these models can now live on the devices themselves for the first time. This talk focused on different architectures for landing these Autonomous Agents on IoT/Edge devices to handle jobs that could run for many minutes to multiple days. The trade-off is a more thorough answer backed by a concentrated amount of knowledge versus the speed of the reply.

Take a look at the recording of the session above. There is also a demo at the end of the presentation that serves as a proof-of-concept for what could be done with these Autonomous AI Agents, using an open source project I wrote called Open Virtual Assistant. The demo highlights exercising the “memory” of these agents (via their vector database) and how one might launch these agents on an IoT or Edge device.
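
For anyone curious about what the vector-database “memory” amounts to, here is a rough conceptual sketch; it is not Open Virtual Assistant’s actual API. It assumes the OpenAI embeddings endpoint and numpy, and it keeps memories in a plain Python list where a real deployment would use a vector database.

# Rough sketch of agent "memory": store embeddings of past exchanges and
# retrieve the most similar ones so the agent can recall earlier context.
import numpy as np
from openai import OpenAI

client = OpenAI()
memory = []  # list of (text, embedding) pairs; stands in for a vector database

def embed(text):
    result = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(result.data[0].embedding)

def remember(text):
    memory.append((text, embed(text)))

def recall(query, top_k=3):
    q = embed(query)
    # Rank stored memories by cosine similarity to the query.
    scored = sorted(
        memory,
        key=lambda item: -np.dot(q, item[1]) / (np.linalg.norm(q) * np.linalg.norm(item[1])),
    )
    return [text for text, _ in scored[:top_k]]

remember("The user prefers summaries under 100 words.")
print(recall("How long should responses be?"))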

Again, all of the resources, links to the articles mentioned, and open source projects used in this presentation can be found in the slides, along with instructions on reproducing the demo from the talk.

Looking Back… Personal Reflections

The most memorable parts of the conference were the conversations with other attendees. The last time I attended a conference was in the second half of 2022, just before ChatGPT blew up in the media. Oh, how times have changed! Besides a good number of the talks focusing on ChatGPT or AI in general, the buzz was definitely in the air and dominated most of the chatter in the hallways. I love tech talk, but I might love these philosophical, what-if, future-predicting conversations even more.

One of the questions I get asked most frequently as someone who works in the AI/ML space is, “Will humans lose our jobs to AI?”. I was asked it again at the conference. My response is always yes and no. Yes, certain jobs will likely become obsolete… and no, there will always be jobs out there since we haven’t solved all the problems contained within our existence. There will always be work, but it might look completely different from what it does today.

An example I always like to give is the calculator and the personal computer. When the calculator became mainstream, did accounting or anything else related to math disappear? The answer is definitely no. Society went on to create far more advanced “calculators” in the form of personal computers that handle all sorts of repetitive tasks, which naturally transitioned into automation: building cars, canning food, mass-producing clothing and shoes. That automation made the job of hand-canning food obsolete, but it created new jobs to develop and maintain the automation systems themselves.

Everyone recognizes the potential for Artificial Intelligence to be the next significant disruptor to humanity. My advice is that, just like with the calculator or the computer, it’s best to understand and know how to use these systems regardless of your field. Those who know how to leverage AI will outshine those who don’t. Finally, when there is no longer a need for a door-to-door encyclopedia salesperson, it’s best to objectively recognize the change in tides and learn something new to transition to. That’s my long-winded answer.

Meeting New Friends

I had a great time at the RTC Conference at IIT, and if you are a person like me who loves to learn about new and exciting topics, technologies, and ideas, then this is a great place to expand your mind. I highly recommend going, and I hope I am lucky enough to be working on something compelling to share with others next year. Cheers!

Let’s have a Conversation!

Hello! How are you doing? It’s been a while since we last chatted. Since you’re stopping by, you might notice immediately that I revived my blog (obviously). I took a break from it for a good couple of years while COVID did its thing; it made sense since we were locked indoors, unable to attend meetups, conferences, or events. Unfortunately, it seems I lost a couple of blog posts leading up to the shutdown, which I thought I had backed up, but here we are. It’s all good… It’s just great that I can talk with you again!

Welcome back

What’s new with me, you ask? Well, as I started to get back into the open source world by attending meetups and conferences again, I had an opportunity to speak at a pre-conference event during KubeCon Europe 2022 about Continuous Integration/Continuous Delivery systems. I was also part of an incredible team that created a Kubernetes distribution from scratch, which was well received.

But… change was brewing at VMware, and I found myself taking a new and exciting opportunity with a great and sharp group of people at Symbl. What is Symbl? Symbl is a rapidly maturing startup in the Conversation Intelligence space, providing a platform that helps other developers integrate conversational context into their applications and services via APIs. If you’re interested in learning more, watch the presentation below by our CEO and co-founder, Surbhi Rathore.

I have enjoyed my time working on application infrastructure, Kubernetes, containers, and everything virtualization, but this is a welcome change for me. Throughout my career, I have always taken on bold, engaging areas of technology that represent the unknown to me. This definitely qualifies. Conversation Intelligence is growing exponentially, and we are just scratching the surface in terms of its potential applications and the new features all of us can dream up.

Sky is the limit

If you’re interested in learning more, I invite you to visit and read through my first blog post at Symbl. It is the first in a series that details my perspective as a new developer in the Conversation Intelligence ecosystem learning how to bootstrap myself into a power user. I aim to become a force multiplier who helps others on their journey to make their applications and services conversation-aware.

If you like what you hear, follow me on Twitter at @dvonthenen to keep up to date on my adventures. I hope to see you out there!

YAKB: Running Kubernetes on Your Laptop

Hi there! Yes, it’s been a while since I have posted to my personal blog. Before I get to the post, I thought I would bring you up to speed on what’s been going on in my world. I no longer work at Dell EMC. I also no longer work at Dell Technologies. After transfer upon transfer, I currently work at VMware. The change has been going well so far, especially moving from companies that are traditionally hardware based to a company that is mostly software focused. While I have only been at VMware since March of this year, the momentum in the Kubernetes and CNCF communities and VMware’s commitment to those communities has made VMware an obvious choice going forward. Which leads me to this blog post…

Let’s get to the Blog!

So I am writing this “Yet Another Kubernetes Blog” post because I needed to document how someone can run Kubernetes on their laptop so that future session attendees can follow along with presentations I might give. If this never gets used in one of my presentations, that’s cool… but I thought, hey, why not just put this out here so that others might benefit from it. You could also use this blog post simply to test drive Kubernetes and play around with its functionality.

NOTE: If you are running on Windows, you might want to install something like VMware Workstation Pro so that you can get a RHEL7 VM running on your laptop. I believe you can try it for 30 days.

Installing VirtualBox

To simplify the installation across the two platforms, we are going to install VirtualBox (5.2 is recommended). To do that, visit the VirtualBox homepage and download the installation package for your platform.

On Mac, just download the DMG file and install VirtualBox like any other application.

On RHEL7, download the appropriate RPM and then install using the following command:

sudo rpm -ivh <path to the downloaded VirtualBox RPM>

Installing kubectl

Next, we need to install kubectl, which is the Kubernetes command line tool for managing a Kubernetes cluster. You will be using this utility for the majority (if not all) of the operations, from viewing what’s going on in your cluster to kicking off applications in it. Fortunately, installation of this component is pretty straightforward.

On Mac, you can run the following command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

On RHEL7, you can run the following command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

You can verify that kubectl is installed correctly by running the following command: kubectl help. Let’s move on to the last component, minikube.

Installing minikube

So minikube is a simple tool that allows you to quickly deploy a single-node Kubernetes cluster on your laptop or desktop. We are going to use it for demonstration purposes and to kick the tires on Kubernetes. You can install minikube as follows:

On Mac, you can run the following commands:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

minikube config set memory 4096
minikube config set cpus 2

On RHEL7, you can run the following commands:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

minikube config set memory 4096
minikube config set cpus 2

RHEL7 NOTE: If you see the following error (I did not during my install, but it has been reported to happen sometimes):

Could not read CA certificate "/etc/docker/ca.pem": open /etc/docker/ca.pem: no such file or directory

The fix is to update /etc/sysconfig/docker to ensure that minikube’s environment changes are respected:

< DOCKER_CERT_PATH=/etc/docker
---
> if [ -z "${DOCKER_CERT_PATH}" ]; then
>   DOCKER_CERT_PATH=/etc/docker
> fi

You can verify that minikube is installed correctly by running the following command: minikube version. Pretty simple!

I want a Kubernetes!

So now that you have all the associated software installed on your laptop, let’s bring up a Kubernetes cluster!

On Mac and RHEL7, run the following command:

minikube start

To make sure you can access your Kubernetes cluster, run the following command:

kubectl get pods --all-namespaces

To stop your cluster, run the following command:

minikube stop

To delete the cluster entirely and free up its resources, run:

minikube delete

Conclusion

Well, there it is! A simple way to get a Kubernetes cluster running on your laptop without requiring a public cloud account or a ton of hardware sitting in a lab or datacenter somewhere. This is by no means a magic bullet, and it is only good for running small, lightweight applications. If anything, it’s a simple way to familiarize yourself with Kubernetes and the general management of a cluster.

I plan on posting to my blog more often now… so stay tuned for some cool future blog posts! If you have any suggestions for topics you want to hear more about, please let me know! I am interested in everything from intro-101-type posts to deep dives. Just drop me a line!