The Data Canteen: Episode 18

Rogue Robots, Cyborgs, & Artificial General Intelligence w/ Rob Albritton

 
 
 

Rob Albritton, previous guest from episode #5, returns to The Data Canteen! Rob kicks things off by updating us on the metamorphosis of Octo's oLabs from the planning stage (where things stood in episode #5), to the fully evolved physical instantiation that debuted just last month, to how he hopes to see it employed for positive impact going forward. Next, we go down a very cool and weird path talking about rogue robots, cyborgs, and artificial general intelligence :-)! Finally, Rob addresses questions submitted by you, our listeners!

 
FEATURED GUESTS:

Name: Rob Albritton

Email: srborghese@g.ucla.edu

LinkedIn: https://www.linkedin.com/in/robalbritton/

 
SUPPORT THE DATA CANTEEN (LIKE PBS, WE'RE LISTENER SUPPORTED!):

Donate: https://vetsindatascience.com/support-join

 
EPISODE LINKS:

Octo's oLabs: https://olabs.octo.us/

 
PODCAST INFO:

Host: Ted Hallum

Website: https://vetsindatascience.com/thedatacanteen

Apple Podcasts: https://podcasts.apple.com/us/podcast/the-data-canteen/id1551751086

YouTube: https://www.youtube.com/channel/UCaNx9aLFRy1h9P22hd8ZPyw

Stitcher: https://www.stitcher.com/show/the-data-canteen

 
CONTACT THE DATA CANTEEN:

Voicemail: https://www.speakpipe.com/datacanteen

 
VETERANS IN DATA SCIENCE AND MACHINE LEARNING:

Website: https://vetsindatascience.com/

Join the Community on LinkedIn: https://vetsindatascience.com/support-join

Mentorship Program: https://vetsindatascience.com/mentorship

 
OUTLINE:

00:00:07​ - Introduction

00:01:14 - Metamorphosis of oLabs from the planning stage to the fully evolved physical instantiation

00:19:57 - What makes oLabs special

00:23:39 - How oLabs will be employed for positive impact

00:28:51 - Introduction to the weird path of rogue robots, cyborgs, and AGI

00:29:14 - What to expect in the near to mid-term regarding AI enabled systems on the battlefield

00:40:28 - Cyborgs...should we expect something like Musk's Neuralink...or something else?

00:43:59 - What is a rogue robot, and are they a present or future threat?

00:49:47 - AGI...possible or not? And, if so, when?

00:54:31 - Answers to listener submitted questions

01:10:41 - The "unasked question question"

01:14:07 - Should you invest in learning defensive and/or offensive adversarial ML attack skills?

01:19:49 - The best way to contact Rob

01:20:44 - Farewells

Transcript

DISCLAIMER: This is a direct, machine-generated transcript of the podcast audio and may not be grammatically correct.

[00:00:07] Ted Hallum: Welcome to the Data Canteen, a podcast focused on the care and feeding of data scientists and machine learning engineers who share in the common bond of military service. I'm your host, Ted Hallum. Today, Senior Director and AI Practice Lead at Octo, Rob Albritton, returns for his second episode on the Data Canteen. Kicking things off,

Rob covers all that's transpired between episode five and now. Back in episode five, Rob laid out his plans to build a state-of-the-art artificial intelligence research and development lab, and in the intervening months he's accomplished just that. We'll hear about the challenges he ran up against, his lab's unique mix of powerful capabilities, and how he envisions using the lab to help federal government clients and humanity in general, through things like cancer research.

Also in this conversation, we cover a range of fascinating topics, to include what you should expect to see in the near to mid-term regarding AI-enabled battlefield systems, cyborgs and how they will likely factor into the coming era of AI warfare, and how concerned you should be about the specter of rogue robots.

Then there's artificial general intelligence: how far out is it, or is it simply beyond our reach? And finally, Rob addresses questions posed by you, our listeners. I hope you enjoy the conversation as much as I did. Now, let's go.

Rob, man, thank you so much for coming back onto the Data Canteen.

You're the first person that we've ever had as a guest come back onto the show. So I think it would be great to just sort of pick up where we left off, because when I talked to you for episode five, way back in March of last year, you told us all about oLabs, which was the R&D lab that was mostly aspirational at the time.

All the plans were in place, but it hadn't actually started to roll out in a physical way. But you guys just had the grand opening within the last month, and now it is officially a thing. So I'd love to hear how everything progressed, the metamorphosis from our last conversation to what oLabs is today.

[00:01:59] Rob Albritton: Yeah. Awesome. Thanks, Ted. I am super stoked to be back on the show, so I really appreciate you having me back on. I had a blast the first time, and hopefully this time we can talk about some interesting topics as well. So jumping right into oLabs: oLabs is short for Octo Labs.

Let me go back in history to the impetus behind it. I actually had this idea, and innovation centers are nothing new, right? A lot of folks have had innovation center ideas and built innovation centers. But if we go back to around 2008, I was working at the Army Geospatial Research Lab over at Fort Belvoir, Virginia, as a geospatial engineer.

We had desktop computers, we had laptops, but it was very choppy and individual, right? There was no real lab space with shared high-performance computing. Machine learning was starting to take off, believe it or not. In 2008, 2009, we were already using machine learning techniques, working with the Joint Personnel Recovery Agency and other government agencies, to do things like search space reduction.

So pilot crashes: how do we reduce the search space? How do we reduce how many times a PJ team from the Air Force has to go out and fly back and forth looking for that pilot? We were using machine learning for that. We were also using it for, you know, predicting effects of the environment on dismounted operations in Afghanistan and things like that.

But we didn't really have a place to do it. So I came up with this idea for a tactical AI innovation center, and we were gonna call it the Idea Lab. We never got it funded, so we didn't end up building it. I left the government and went on to work in Silicon Valley at Nvidia. Nvidia actually was invested in building a federal AI-focused lab space in the Northern Virginia region, the DC area.

The financial stuff didn't work out; the stock didn't take off the way we thought it would at the time. Obviously we all know the story of Nvidia now, a many-billion-dollar company at this point, market cap way up in the billions, but at the time the stock didn't do so well, and we decided not to build the lab.

I left Nvidia, landed at MITRE, and pitched the same idea. MITRE, being a not-for-profit, could not afford to build it. And then I finally landed at Octo, where Mehul Sanghani, our CEO, made a promise to me when he was recruiting me. He didn't promise the funding to build the lab, but he said, hey, I love the vision.

Let's build an AI innovation center. I'll give you the opportunity to pitch the idea to Arlington Capital Partners, our investor, and if they agree, then we'll build the lab. So he stood by his word and let me pitch the idea to Arlington Capital Partners. They bought off on it. They agreed to fund it.

So yeah, fast forward: it's been a little over two years, two and a half years or so, since I pitched that idea and had to go through all the ROI analytics, all the business use cases and all that kind of stuff. I pitched it to Arlington Capital Partners and the board of directors at Octo, and they loved it.

They loved the return on investment. So yeah, I think last time we spoke it was just an idea; we had some plans in place. Just four weeks ago we had our grand opening, and Senator Warner came out. Yeah, it was awesome. And I'm representing today, obviously, always representing oLabs.

I was gonna wear the hat, but I figured I'd actually look presentable today instead of in work-from-home scrubs. I actually shaved for you, Ted. But yeah, so Senator Warner came out, Cynthia Sadie, a former executive who was the CTO over at the CIA in her past, and we had Anthony Robbins, who runs the federal practice for Nvidia.

I think anybody watching this that's part of the veterans in data science community probably knows that Nvidia is, you know, the world's largest and probably most important AI computing company. So we had a bang-up group of individuals come out and help us open.

We had something like 185 visitors. But the most important thing I think you're trying to get to is: what is the lab? The vision was that we would build a tactical AI innovation center. So when I originally pitched the idea to our investors, I pitched, I'll just call it a man cave. Quite honestly, it was gonna be a warehouse-style environment with drones and robots and operators and green suiters, special operators.

When I say operators, I mean green suiters on site, a shoot house, all of the kinds of things you need to build out a tactical environment and test new technologies, build new technologies, kind of this mad scientist environment. We decided not to make it so much of a man cave and to make it more of a comfortable space, like something you would see in Silicon Valley, but still very much a working environment, not just a marketing pitch.

Like you see from some companies when they build innovation centers that have big, fancy screens and a good demo facility, but that's all it is, a demo facility. That's not what oLabs is. And stop me, I know I'm talking a lot here, but I get pumped about oLabs.

It's an awesome space, and I'm super excited about what we're gonna do for the federal government and the warfighter and cancer researchers, and just so many people doing really important jobs in this country and around the world, quite frankly. It's a 15,000-square-foot space.

So a pretty big space. It used to be Walmart Labs. Walmart Labs did a lot of their machine learning research and engineering right there in Reston, Virginia, where we ended up building oLabs. We ended up working with Nvidia, Pure Storage, VAST Data, NetApp, AWS, and a couple of other partners to build out a multi-use, multifunctional high-performance computing facility.

It's all glass-enclosed; it's a beautiful data center. You walk in through the front doors of oLabs and you're presented with the data center. So, 20 petaflops of computational power, multiple DGX A100s, and I'm throwing out a lot of acronyms here; folks can ask me more about these things offline, but DGX A100s are servers built by Nvidia.

We also have a Dell 8440, I think, server in there; it's got 10 Tesla A100 GPUs in it. So massive amounts of computational power, because obviously it takes a lot of computational power, and increasingly so as we get deeper and deeper neural networks that we're trying to train and massive datasets that we're trying to train on. We do a lot of computer vision work, which is inherently geospatial data, and video and images are inherently large data, right?

So, 15 petaflops of compute, three or so petabytes of flash storage. You walk down from the data center, you keep walking down the lab space, and you are presented with a 24-foot-wide by 10-feet-or-so-high touch LED screen that we worked with Planar Systems out of Oregon to build.

That was a couple-million-dollar investment. Walk further down and you're presented with a CQB, close quarters battle, or simulated shoot house environment. It kind of looks like a dark room, but it's basically an environment with really sensitive lighting where we can turn down the lights and simulate nighttime conditions, maybe moonlit conditions, cloud cover, those kinds of things.

We can do operationally relevant night ops testing. We can simulate a shoot house, so we can simulate tactical environments where operators or soldiers might stack up and go into the room. And then we can actually inject our AI solutions, our algorithms and software, into that environment and let the operators test our solutions in an operationally relevant environment.

The next room over is our robotics room. That's where our 3D printers and CNC machine and that kind of stuff are being installed. We've worked on programs with the US Army in the past; one in particular was IVAS, the Integrated Visual Augmentation System. We worked on that program, and we still do.

Part of that program was a little drone called the Black Hornet, or SBS. It's a nano drone that fits in the palm of your hand, a tiny little thing but incredibly capable, built by FLIR and Prox Dynamics. We wanted to build some intelligence into that drone so it could follow troops, in urban environments in particular, and we weren't able to get that done with FLIR. So we ended up just taking the drone apart ourselves and kind of reprogramming it to do that, and we realized that we needed our own space to do that kind of work.

And then finally, and then I'll stop talking about oLabs, there is a new mothers' room. So when new mothers are, you know, pumping and doing things like that after having a baby, it's a really comfortable space for them to do that: refrigerators, all that kind of stuff.

I know my wife worked for a defense contractor at one point in her career when she had our first child, and they basically put her in a closet with boxes. It was kind of ridiculous, and I wanted to make sure that doesn't happen at oLabs.

And then there's an awesome break room with all the things you need: beer, coffee, maybe liquor, whatever you need to have a good time and build awesome tactical AI solutions. That's oLabs. Oh, and I'm gonna turn it back over to you, Ted. If you have any questions about the physical space, we can take those now, but otherwise I kind of wanna talk a little bit about the operating vision, right?

Like, cool, it's an awesome space, but why did you build it? Why did we build it, and who sits there?

[00:14:03] Ted Hallum: Oh, I will definitely ask you a question along those lines in just a minute. Before I get there, I will say it sounds like you covered all the bases, from the compute and storage hardware to the robotics lab and the CQB room, and I hope that part resonates with this audience the most, probably because we all know the iterative nature of data science and machine learning.

We know that you try something, it doesn't work; you try something else, it doesn't work. You have to keep trying and experimenting until you find out what does work, and being able to have that lab and that CQB room right there, resident with oLabs, I can see where that shortens the fuse tremendously, as opposed to the alternative of having to book a trip to go out to Fort Bragg or wherever to test your model, find out it's not doing exactly what you wanted, and come back to Reston.

I mean, you've radically shortened that iteration process, and I think that's fantastic. But having said that, hearing you talk about the process even before you came to Octo, of how you were first with the Army and then with Nvidia and then with MITRE, and there were challenges at each of those places that prevented it from being possible until you got to Octo.

But then you also did this amid COVID and the supply chain issues and the chip shortages. I would just love to hear, since you were on the show last time and all this has actually come to life, what challenges that might not be super obvious did you have to fight through to make oLabs what it is today?

[00:15:42] Rob Albritton: Yeah, so I will state the obvious and confirm that supply chain challenges are real, very real. Believe it or not, there are some silicon chips, processors, that are built in a single factory, meaning there's one location in the world that builds them. You hear about these

risks to the supply chain, but you never really believe it. At least that was my take on it: no way, that can't be true. But it's true. There's a chip, a processor, that goes into one of the cooling systems for our data center that we literally ordered, I wanna say, October of last year,

or November, sometime around there, so before the holidays, and we still don't have it. It comes from a single place in Taiwan, and that chip has been back-ordered for months and months. So that is real. There was one point during the process where, I can't remember exactly why, but we needed two-by-fours, I believe.

And two-by-fours, United States of America two-by-fours, were back-ordered. We couldn't get lumber for a little while. A lot of that stuff eased up quite a bit over the last few months, so as we got later into the spring we were able to get more supplies more easily, but it was still difficult.

And then obviously labor, the labor shortage, impacted us. The contractors we worked with, there were times where they couldn't show up on time and it wasn't their fault; they just didn't have the labor to come do the work. So those things were real. And then there were all the moving pieces when you build something like this.

Anybody that's built a new house, which I haven't been lucky enough to do, but when I was a kid I witnessed my parents do it, and I've had friends that have built houses, and they all say how much of a pain in the neck it was: different contractors coming in and all kinds of stuff.

You gotta make so many decisions. That was the exact same thing with oLabs, but on steroids. It was just incredible, the number of different contractors and laborers that had to come do the different pieces to build a lab of this size. Even just putting in the Planar touch LED screen, building that out:

there was somebody that had to come build the framing, then somebody that had to come put the screen in, then somebody that had to come put in the IR sensors, because it's a touch screen, so they put IR sensors around the outside. Then there was somebody that did the speakers, somebody that did the software. That's just for the screen functionality.

So that part was hard. But luckily we have a really good facilities team and a really good IT team, and they stayed on top of it and basically lived within that oLabs environment for the better part of a year, working with the contractors to build out that space.

[00:19:04] Ted Hallum: Well, as you talk through those challenges, I hope that serves to bring into focus how difficult bringing something like oLabs to life really is.

If it weren't that hard, everybody would have a lab like oLabs, right? Everybody wants it, but wanting it and actually executing and achieving it are two very different things. In fact, leading up to this episode, you mentioned that oLabs has 15 petaflops of compute, and I was kind of interested, just in my own mind, to put that into perspective.

So I went back and looked at the history of supercomputers, and I found that in 2012 the world's most powerful supercomputer was an IBM supercomputer at one of the national labs called Sequoia, and it was just over 16 petaflops, which puts it right in the ballpark of what you say you've got there at oLabs.

So you have about as much supercomputing power as the maximum in the whole world just a few years ago, and you've got that to put to work against GovCon problems specifically; you've got the warfighter in mind with the CQB virtual shoot house and everything like that.

While there may be other labs somewhere that have as much computing power as you have there at oLabs, the way that you've combined that with the robotics lab and the CQB virtual shoot house, I think, becomes a unique mix that may not exist, or probably doesn't exist, anywhere else for supporting the federal government.

[00:20:39] Rob Albritton: I would agree with you, Ted. I think the differentiator is having all of those different capabilities under one roof, right? So having the ability to train models, operationally test them, build out or modify robotics if we have to. We also have a facility we use not too far down the road from us in Loudoun County, Virginia, to fly our drones.

So we have literally everything we need to build, test, and field operationally relevant AI-enabled solutions on site, under one roof, which is pretty cool. And you're right, it's amazing how far we've come in computational power over the last 10 to 20 years. Another interesting stat: the

Nvidia AGX Xavier, right? Small embedded GPUs, smaller than a football. I think a lot of us have probably used them. Actually, the GPU itself is, you know, cell-phone-sized, but the developer kit is half the size of a football or so. That GPU has as much computational power as the world's fastest supercomputer in the year 2000, ASCI Red.

A lot of us have heard of ASCI Red, right? So 20 years ago, and you now have that supercomputer fitting in the palm of your hand, which is why it's so amazing to me, and I get so fired up about edge AI and what we can do at the edge, especially for the warfighter that doesn't have the compute, that doesn't have the ability to push data back to the cloud.

That's another reason we built on-prem solutions. They're sexy also, but we needed them to build operationally relevant solutions. The warfighter's not pushing all their data back to the cloud to do their computational workloads on AWS instances, right? They've gotta be able to do it on site somewhere, on-prem

primarily. So that's why we built out our compute infrastructure that way, so that we can simulate what you might see in a TOC or some other compute facility closer to the edge, closer to the tactical edge. And then we actually deploy most of our algorithmic solutions onto devices like the AGX Xavier or, you know, the Jetson Xavier NX. Tomahawk Robotics is a really critical partner of ours.

They have the KxM on-body compute, right? An embedded GPU that can clip onto your breastplate, that kind of thing. So I kind of took us off in a different direction, but I wanted you to understand why we built oLabs the way we did.

[00:23:39] Ted Hallum: Absolutely. Well, now that we've firmly established the unique mix that makes oLabs so special, I would absolutely love to hear, now that it's built

and ready, going forward, looking out into the future: what types of government problems would you love to see oLabs, with all its compute power and storage and its ability to quickly iterate on problems, what would you like to see it put to use for?

[00:24:08] Rob Albritton: Yeah. So,

you know, now that it's built, as hard as that was, now the just-as-hard part, or maybe even harder, is getting government clients and partners into the lab. I know there's a COVID-based question later on, and I'll speak to those challenges a little more then, but oLabs is not just a physical space, right?

It's a space for us; think of it as an incubator. And we kind of flip the script on the government, right? Typically, especially for services companies like ours, you have kind of butts in seats, quite honestly: we go sit on site with the government and assist the government on contract.

We flipped it. We said, government, you come to us. You come sit on site with the world's best machine learning engineers, which is what we're doing with some of our partners, like the IVAS folks down at PEO Soldier and a couple of others that have expressed interest in putting soldiers and engineers on site with us at oLabs.

And what that does, by the way, is give us more ability to get requirements from the operational units quicker, because we have soldiers or DoD personnel on site with us that have access to those requirements. They have access to our engineers and our engineering facility and our compute and all of those things.

And it enables us to build prototypes and pass them back to the operational units quicker for test and evaluation, to let us know what works and what doesn't work. We call it soldier-centered design, or end-user design, basically human-centered design: getting that feedback from the end user as quickly as possible is critical.

And that's what oLabs facilitates. The kinds of problems that we wanna solve are many, right? I know I talk a lot about defense work, and that is one of our primary focuses. That being said, our CTO, CJ Edward, is exceedingly passionate about curing cancer. We've all been touched by it;

I know I was recently. So we take his passion for it, and we take it seriously, and we do things like computer vision for cancer cell detection. We have contracts with NIH and NCI, the National Institutes of Health and the National Cancer Institute, and many other

health agencies, and we do that kind of work. Those are the kinds of problems we wanna solve: can we cure, or at least help get closer to a cure for, cancer, those kinds of things. On the defense side, drone swarming; I think there's a question later about that. Drone swarming, you know, we're not there, but when I talk about the Black Hornet nano UAVs, how do we make multiple Black Hornet nano UAVs fly together and assist the soldier on the battlefield?

Those are just a couple of examples of the kinds of problems, but there are many, many, many. And one of the benefits of having a space like oLabs, it never fails: almost a hundred percent of the time, every single time we bring a new customer or a new potential customer through oLabs, they come up with a new idea, a question we've never heard, a problem or a mission set we've never heard of, and new ways to use oLabs, which is pretty cool.

[00:28:16] Ted Hallum: Now, Rob, I really appreciate you taking the time to take us from where you were with oLabs in our last conversation, back in episode five, through everything that's transpired to get it to the real-world instantiation that you have now, plus everything you plan to do with it and how you plan to get there.

I think there are a lot of people listening to this episode who are interested in that, whether they're a potential client on the government side or maybe they would like to one day work at oLabs and put their data science and machine learning skills to use there. So thank you for that.

Changing gears: when I first announced in our LinkedIn group that you were gonna be coming back on the show, you put in a comment, you said, "I'm honored to join you again on The Data Canteen podcast. I hope the questions take us down a weird path. Rogue robots, cyborgs, artificial general intelligence are all fair game."

So I'm gonna take you up on that offer, because that sounds like an exciting and wild ride. My first question along that vein: as we look to the near to mid-term for AI-enabled systems on the battlefield, I'm curious what you think that will look like. Are you expecting to see AI-enabled conventional systems like tanks, or, you mentioned drone swarms a second ago,

do you think it's gonna look more like that? Or, I think we've all seen the Boston Dynamics robots; they've got one that looks like a little dog, named Spot. There's another company called Ghost Robotics that has something that looks a lot like Spot, but it's got a 6.5 Creedmoor rifle mounted on its back.

Do you think that it's gonna look more like that? I'm not gonna ask you to project 10 or 15 years down the road, because who knows, right? Things are gonna get crazy. But in the foreseeable future, I feel like AI-enabled systems are gonna have to gravitate in one of these directions.

I'm curious: do you think it's one of those that I just mentioned, or is it something totally different?

[00:30:20] Rob Albritton: That's a great question. Yeah, and I said let's get a little weird because, not just on your podcast in particular, The Data Canteen, but in general in the defense industry, these conversations are so often just business-focused, you know, this contract, this big JEDI contract worth a billion dollars, and that just bores the crap outta me.

Quite honestly, let's talk about real stuff that we're all interested in, right? Like killer robots. So, great question. I actually think that we don't have to coalesce around one of those; I don't think one of those is the answer. I think there are gonna be advances, and there already are, in many different areas. I'm gonna jump into that, but I also wanna say up front that I'm also very realistic, right?

My wife might say negative; I say pragmatic. You know, ask yourself, and I had this conversation with some senior leaders out of the Pentagon just a couple weeks ago: so far, name a success on the battlefield that would not have occurred without AI or machine learning.

There aren't any, or very few. If you can, I would like to know about them. So we need more operational successes, right? We need to be able to say we would not have succeeded in, whatever, identifying that rare object in Russia or that object moving in North Korea without computer vision, without machine learning.

We need more of that. So I'm saying keep dumping money into it, keep focusing on these things, but I'm very realistic that we have a ways to go. But to answer your question: when you talk about conventional systems, tanks and armored vehicles and helicopters and things like that, I think the answer is yes, that is part of the equation.

We are not, in my opinion, gonna see fully autonomous tanks on the battlefield anytime soon, at least from the US, at least coming out of US military assets. That being said, we are really good, probably the best in the world, the US, at bolting new technologies onto old systems.

Abrams tanks, for example, have been around forever, but we've modified things like the software systems that work on the turret. Seeing an armored vehicle of that size and weight moving at 40 miles an hour with the targeting system locked onto a target, sometimes a moving target, the tank moving, the target moving, and able to hit that target from whatever it is, 20 kilometers,

I don't know what the distance is, but I've seen demos of it; it's incredible. But that tank wasn't designed to do that when it was first built, right? We bolted things onto it, and I think that's gonna happen now and as we go into the future, especially as we're able to link them together. I know the Next Generation Combat Vehicle CFT, the cross-functional team on the Army side, and Future Vertical Lift, which we've had the pleasure, Ted, of meeting leaders out of that organization, all over the Army, right?

They are passionate about, and moving towards, putting computational power on their vehicles and linking them together. So I do believe, inside of 10 years call it, we will see a battlefield where every armored vehicle on the ground is connected to the computational power on every Apache attack helicopter, every Black Hawk; they're all connected.

They all have computational power, thereby creating basically a supercomputer on the battlefield, right? Distributed computing. That's what we do in a data center: we have many GPUs, many servers that are connected together, and they create one big supercomputer. That's exactly what we can do on the battlefield.

So I think that's the direction, right? We're using old systems and bolting on new stuff. New comms, communications technologies, are critical. I won't jump into the 5G domain, because I think it's ridiculous, and if you've really tried to use it in a tactical environment, we'll just say it's not there, in my opinion.

But there are other technologies, like software-defined radios made by Silvus and Persistent Systems and DOMO and all these different companies, and man, the bandwidth on these radios is incredible compared to what somebody that may have been deployed just 10 or 15 years ago

would've experienced. So that's one. Let's see, drone swarms. That's a good one; I like talking about swarms. It is exceedingly difficult, by the way, if you've ever tried it, to get multiple drones to communicate, avoid collisions, coordinate with each other, and accomplish a single task together. To me, though, I ask the question why. What's the concept of operations?

And I still struggle with it, quite honestly. I don't really understand, in my opinion, why there's a benefit to having 30 or 40 drones, perhaps to clear a village or to clear a large area before soldiers or operators go into it, to save lives. There are concepts of operation for some automated systems today, fully autonomous drone systems, where they can launch, go into a building, and clear that building. But there comes a point where your adversary will

catch on to not only the concept of operations, how those soldiers are deploying the technology, but also the technical limitations of those drones. Some of them use LIDAR, and it's very easy to defeat a LIDAR system. You know, the seventies beads, or sixties, whatever it was, where you hang those beads in your doorway? Hang those in your doorway

and you scatter the light. It's super easy, right? Take a baseball bat to it, take whatever. In my opinion, we're basically announcing, hey, the Americans are here. So I'm not convinced yet, not completely convinced, on the usefulness of drone swarms for those types of operations.

I am convinced on the use of drones for refueling, for supply, moving supplies into locations that trucks, wheeled vehicles, can't get into, in desert environments, maybe snow-covered environments, those kinds of things. And that can be done fully autonomously.

And I think that's being done today by private industry and is just around the corner for the military. Let's see, and I think, was that your last one? Oh, Boston Dynamics. Absolutely. I personally saw, when I was working at the Army Geospatial Research Lab just in the mid-2000s, the wheeled vehicles, I think it was called the SMET or something like that.

The wheeled vehicles and things that they were trying to automate and build, basically autonomous pack animals and things like that, that could pack supplies in and out of rugged terrain, they were terrible. They were absolutely terrible. There was no way they would actually work, right?

Fast forward, it's been 15, 16 years, and man, if you've ever seen a Spot in action, they are incredible. They can carry weight, they can follow soldiers very well, they can traverse rough terrain, move over fallen logs and rocks and boulders and all kinds of stuff just as well as, if not better than, humans in many cases, humans with weight on their backs.

So I think that augmentation of the soldier on foot, the dismounted soldier, for example, is happening now and is gonna just increase over the next, you know, few years. So

[00:40:00] Ted Hallum: From what you said in the beginning, it sounds like you certainly see the capability to bring AI to bear on the battlefield in each of these areas.

And you think we're gonna see a smattering kind of across that whole spectrum, not a focus in any one particular area. Is that right?

[00:40:14] Rob Albritton: That's correct. And part of it is because of follow-the-money, right? I mean, the reality is, startups and companies that are rapidly growing are not gonna allow some of these domains to go away.

[00:40:28] Ted Hallum: Sure.

Sure. Yeah. Now, I think the next topic was cyborgs. When I hear cyborgs, I think of taking normal human capacity and providing additional abilities via artificial intelligence, robotics, and other technologies, bringing those things and dovetailing them in with our humanity. So when I look at what Elon Musk is doing over at Neuralink, it kind of seems like we're headed down that path already, but I wasn't sure, when you said cyborgs in your comment, were you thinking like Neuralink, or are you thinking more than

[00:41:05] Rob Albritton: that?

I was thinking more like Neuralink, at least to start. Just being able to control computers, maybe drones, by thinking, right? And I think of it in terms of clandestine operations. I'm always thinking about how we're gonna, you know, if I don't have to pull out an ATAK, right,

an Android phone or something that's giving off light, that's a tremendous benefit to me on the battlefield and being able to operate at night. So if I have something that can connect to my head, maybe I'm not so keen about embedding electronics into my brain, but, you know, maybe I'll warm to it soon.

But that's what I was thinking. And also, I think more importantly, is use in recovery from traumatic brain injury and, you know, being paralyzed, losing limbs, things like that. There have been some successes over the last few years, and examples of, I think there was one in,

I'm probably gonna get the country wrong, Switzerland maybe, but there was a success recently where researchers were able to use an implant to enable a quadriplegic man to communicate. He was able to communicate, and he could spell out, he had to do it letter by letter, but he spelled out an entire sentence and ordered food.

Using no words: he didn't speak at all, he didn't write this down, it was just using electrical waves from the brain, and they translated that. So that kind of thing is amazing to me. If you've ever been to the warrior rehab facility over at Fort Belvoir, right,

it's an amazing facility where they take warriors from Walter Reed and then help try to reintegrate them, teach them how to live, right? Teach them how to live without limbs, for example. Those kinds of things, where perhaps you have your limbs but you're paralyzed from the waist down because of an IED or a rollover; rollovers are exceedingly dangerous in combat and happen quite often

and cause a lot of injuries, a lot of paralyzed individuals, unfortunately. But implants in the brain have the potential to help these people walk again, that kind of thing.

[00:43:59] Ted Hallum: Absolutely. Now, the next one was rogue robots. So here in the West, the United States and our allies, we have taken a pretty conservative approach to AI-enabled systems on the battlefield.

We always wanna have a human on the loop; at least, that's where we all are right now. Of course, there are other nations in other parts of the world that have a larger appetite for granting more autonomy to AI-enabled systems on the battlefield, but even those countries have a sense of self-preservation.

So, you know, they wouldn't want a system to be too rogue. So when you say rogue robots on the battlefield, I'm curious what you mean, how significant of a threat you think that is in the immediate future, and what kind of time horizon you think we might really need to be worried about.

[00:44:52] Rob Albritton: Yeah, so first I think part of the question revolves around fully autonomous systems and fully autonomous weapon systems. You probably know I'm gonna get into this, because I say this all the time, and if there are folks at the Pentagon listening they probably don't like it very much, but we do, at least in the Western world and especially in the United States, policy, policy, policy: AI policy, this AI direction, these ten standards, this and that. But you need to develop the capability first.

I mean, what are you building policy around? You've gotta have a capability that you're building these rules around, and unfortunately we don't have that, in my opinion. But there has to be a mix, there has to be some of both, right? You can't have completely rogue elements building fully autonomous weapon systems with no rules.

That's just not the Western way; it's not the American way. But somebody asked, I think it was on LinkedIn, I don't have the question exactly, but it was something to the effect of, when is a robot rogue, and who is it rogue to? And my answer was:

if I've trained that robot to do something specific and it does that, it's not rogue, right? But if an adversary injects malicious code into it or deceives it in some way, and we could go down the adversarial attack path here, but there are ways to deceive computer vision algorithms in particular,

and NLP models are pretty easy to deceive and poison, and that robotic system does something, it's rogue to me. I'm the original programmer, I'm the original guy that built that robot and trained the models, but that robot's rogue to me now, because it's doing things I didn't tell it to do. It's not rogue to the other guy.

It's doing exactly what the adversary wants it to do, so to them it would not be a rogue robot. Another way to define, for me, what is rogue could be very simple. And I do think we need to get to, if not fully autonomous, close-to-autonomous weapon systems.

That is the only way we're gonna shrink the kill chain, right? The time it takes from us identifying a target, to actually launching, we'll just say, precision fires onto it, to actually taking that target out and then confirming that it was the right target. That is an excruciatingly long process today for the US,

and really for all Western militaries. Some of our adversaries, the Russians for example, have no qualms whatsoever about using fully autonomous weapon systems, striking signals of interest that they don't know are actually the right cell phones. For example, they identify a cell phone that may or may not belong to somebody they're trying to take out, a high-value target, and they will strike it without being able to see it and confirm.

We don't want that to happen. We want our systems to be reliable, but if they were to hit the wrong target, that's a rogue system to me, or could be, right? That model has been trained incorrectly, or has learned incorrectly that a hospital looks like a weapons warehouse that somebody's storing weapons in, and it strikes the wrong target.

So to me, that would be a rogue asset. And potentially, when we get into active learning and continual learning and some of these newer paradigms, especially active learning, where the system is labeling its own data and training itself and retraining over and over again, as computational power increases, data increases, and models become more powerful and able to retrain themselves,

we could see more and more of that happening if we don't put the right protections in place.
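For readers who want a concrete picture of the self-retraining loop Rob describes, here is a minimal, hypothetical Python sketch of a pseudo-labeling (active/continual learning) cycle with a simple promotion gate as one possible protection. The function name, thresholds, and data shapes are illustrative assumptions for this episode page, not anything oLabs has published.

```python
# Illustrative sketch only: a self-labeling retraining loop with a "promotion
# gate", so a silently degrading (or poisoned) candidate never replaces the
# deployed model. All names and thresholds here are hypothetical.
import numpy as np
from sklearn.base import clone
from sklearn.metrics import accuracy_score

def retrain_with_self_labels(model, X_labeled, y_labeled, X_unlabeled,
                             X_holdout, y_holdout,
                             confidence=0.95, max_drop=0.02):
    """Pseudo-label confident samples, retrain, and promote the new model only
    if accuracy on a trusted, human-curated holdout set does not degrade."""
    baseline = accuracy_score(y_holdout, model.predict(X_holdout))

    # Self-label: keep only predictions the current model is confident about.
    probs = model.predict_proba(X_unlabeled)
    confident = probs.max(axis=1) >= confidence
    if not confident.any():
        return model  # nothing confident enough to learn from this round

    y_pseudo = model.classes_[probs[confident].argmax(axis=1)]
    X_train = np.vstack([X_labeled, X_unlabeled[confident]])
    y_train = np.concatenate([y_labeled, y_pseudo])

    candidate = clone(model).fit(X_train, y_train)
    candidate_acc = accuracy_score(y_holdout, candidate.predict(X_holdout))

    # Promotion gate: the trusted holdout set is the kind of "protection" Rob
    # mentions; a poisoned or drifting candidate that scores worse is rejected.
    return candidate if candidate_acc >= baseline - max_drop else model
```

The key design point is that the loop never blindly trusts its own labels: every retrained candidate has to clear a human-curated benchmark before it replaces the model in service.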

[00:49:22] Ted Hallum: Absolutely. When I think about it, if we don't take precautions against what would be considered normal cyber threats today, then once you have that active learning and everything in place, an adversary injecting poisoned data into your training set really wouldn't be that difficult if you haven't been intentional about making sure that people who shouldn't have access don't have access.

Right, that's right. Now, I think the last thing that you mentioned in the take-the-weird-path comment was artificial general intelligence. And I think this is super interesting, because when you look at the world's best research scientists, there's a huge spectrum. There's one end that says we don't think it's

ever within the grasp of humanity to create what would be defined as full artificial general intelligence. At the other end of the spectrum, there are research scientists who say, yeah, we absolutely think it's possible, but then amongst that camp it ranges anywhere from "we'll have it in 15 years" to "we think more like 60 years."

So along that vast spectrum, I'm curious to find out where you fall and how quickly you think we'll get there, if you think it's possible.

[00:50:43] Rob Albritton: So, I reserve the right to change my answer based on new data that comes out at any time. But right now, with what I know about the topic, I don't see it, at least in my lifetime. I'm 40, right?

Let's hope I live to be 80, 90 years old. I do not believe strong AI is possible in my lifetime, as most of us define it: being truly cognitive, being able to think and make decisions that the model hasn't necessarily been trained exactly to do, on data of occurrences that have already happened,

as we train models today. I don't think it's in the next 40 years. Someday, perhaps. And I don't think it'll be today's technology. Let me say it this way: today's artificial neural network paradigms will not drive it. That will not be what drives it.

It will be something like spiking neural chips, you know, that mimic the brain a little better and are, quite frankly, far more powerful than what we have today. So yeah, someday; I just don't think in our lifetimes. I think about self-driving cars, and I use the Nvidia example all the time.

Nvidia's SaturnV, and I think they built a new supercomputer recently, but I can't remember how many DGX A100s SaturnV has, let's call it many hundreds, maybe a thousand. So, almost more computational power than I can wrap my brain around, just incredible amounts of computational power.

And they use it every single day to train. They have a self-driving car network called, you know, BB8, Bravo Bravo Eight, and they train their models on it. They simulate something like a billion miles of driving every single day, 365 days a year, and that's not enough to simulate every possible outcome in a human's lifetime.

They could train a billion miles every single day, and that's still not enough to simulate every single possible outcome for a self-driving vehicle. So when I think about that kind of problem and that many, you know, almost infinite possibilities, I just can't wrap my brain around AGI being able to do those kinds of things, drive a vehicle perfectly, for example, without any human interaction.

I just have a hard time believing it in the near future.

[00:53:58] Ted Hallum: We're in agreement. I think that one day it might be possible, but certainly not with the current frameworks and paradigms that we have in place. The artificial neural networks that we use right now are phenomenal; it's amazing what you can do with them compared to the previous machine learning techniques, you know, pre-2010. But as far as getting us towards artificial general intelligence, I think it's gonna take another, or perhaps several, of those sea-change moments or advances where we just leapfrog way ahead, and that's what will finally get us there.

Now, to shift gears a little bit, we had some pretty cool questions submitted by some of the Data Canteen's listeners, so I'm gonna start throwing those at you, if that's okay. Yeah. All right. So Glen Ferguson asked, he says, MLOps becoming a profession was the last big change in data science. Do you see any trends now that could drive the next big change in data science, or do you think we're finally at a plateau?

[00:55:00] Rob Albritton: Good question. If you look at the number of papers being submitted and things like that in some of the newer deep learning domains, I wouldn't say it's plateaued; slowed maybe a little bit, but plateaued, no, I don't think so. I still think there's room within machine learning ops, quite frankly.

Ted, I'll plug some of the work that you guys are doing at oLabs, you and your team, on drift detection and model performance monitoring and those kinds of things. The world's best MLOps teams haven't figured it all out, right? There are new ways to do that.

So I think there's room to grow within MLOps, in particular on the right side of the ML pipeline, so model performance monitoring. And then naturally, I think the next phase is completing the loop: active learning, continual learning. Active learning especially intrigues me, and admittedly I'm not an expert;

however, I do have a little bit of experience with active learning, and we are actively working on active learning solutions at oLabs and trying to put those solutions into the warfighter's hands. But I really think that is the next explosion, if you will, in the ML workforce, where the desire for skills will be: in the ability to keep

ML models performant. And that includes building out solutions using things like active learning. So if I were trying to build out my portfolio and my skill set, that's probably the domain I would focus on most. It's incredibly valuable to companies. We've been through this explosion;

if you think about the DoD, the Department of Defense, for example, they're very focused on the training piece, right? More efficient training: how do we train a better model, how do we get it more performant? But there's not a lot of focus on, okay, once we detect a scenario or a performance drop-off, what do we do about it?

So if you could plug that hole, fill that requirement, I think you could kinda write your own paycheck, if you will, because the demand is gonna be just incredible for those kinds of skill sets.
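As a rough illustration of the model performance monitoring Rob describes, here is a minimal, hypothetical Python sketch of input-drift detection using a two-sample Kolmogorov-Smirnov test. The feature, window sizes, and alert threshold are assumptions for illustration, not oLabs' actual tooling.

```python
# Minimal sketch of input-drift monitoring: compare a production window of a
# feature against the distribution the model saw in training and flag drift.
# Threshold, window sizes, and the simulated data are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(train_values: np.ndarray, live_values: np.ndarray,
                   p_threshold: float = 0.01) -> bool:
    """Return True if the live distribution differs significantly from training."""
    _statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < p_threshold

# Example: simulated training distribution vs. a shifted production window.
rng = np.random.default_rng(seed=0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)   # what the model trained on
live = rng.normal(loc=0.7, scale=1.0, size=500)      # recent production inputs

if drift_detected(train, live):
    print("Drift detected: alert the MLOps team and consider retraining")
else:
    print("No significant drift in this window")
```

In practice a check like this would run per feature (and on model confidence scores) on a schedule, with any retraining gated the same way as in the earlier active learning sketch.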

[00:58:00] Ted Hallum: All right, listeners, I hope you're furiously writing notes. That's the kind of stuff that can make you much more desirable when you go to an interview or put your resume out there.

All right. So, Rob, our next question comes from Carlos Rodriguez, and he says, I'd like to hear Rob's perspective on digital transformation post-COVID, with a focus on the risks and opportunities of process automation and the current automate-everything craze.

[00:58:28] Rob Albritton: Uh, yeah. So, um, So, so it did, I, I, I do think COVID sped, you know, sped things up a bit.

Um, you know, there were trends that were already occurring, um, you know, work from home trends and things like that. Right. And I think COVID, uh, shut downs, just kind of sped that up and made it made a lot of it more permanent. Um, when you talk about automation, um, I, I will in, in full disclosure, um, you know, prior to COVID I was, I was in the camp of, and, and always saying, you know, don't worry, you know, robots are not taking your jobs.

AI is not taking your jobs. Um, and. While I still, I think that's true, right? Robotics haven't stolen jobs per se, but through necessity, uh, we are seeing, uh, increased speed with which companies like Amazon, right. Are adopting fully autonomous warehouses, for example. And that's not necessarily Amazon's fault.

It's not Amazon saying you guys are out or firing you. You're done no humans. It's just the way our, our demographics have shifted in this country. Uh, economics have changed. Um, people are realizing maybe that they don't need to, uh, there are better jobs available to them, or they wanna spend more time at home with their families and they don't wanna work in a warehouse every day.

And so increasingly we're seeing folks quit those kind of jobs. And as they do that, companies like Amazon are taking that as a opportunity to automate their warehouse. Um, so I, I don't know. I don't think I'm answering your question completely, but that's been what I've observed and, um, I, I, I don't see it as much on the knowledge side, I will say.

Yeah, digitize everything, cloud everything. It amazes me how often we hear this today from our government customers: some of them shifted and adopted all-cloud solutions, let's say four or five years ago, and just now they're realizing how much that costs, right?

Like, oh my God. Even us, at oLabs, there was a point where we were regularly spinning up $30,000-a-week training jobs, right, training massive deep neural networks to do some of our work for the Department of Defense, and the Department of Defense can't pay us enough to keep that going.

Right. So that was another reason we built our own on-prem solutions. But I do think there's some backlash, and maybe some whipsaw, I guess, back in the other direction, away from cloud. And that will increase, and that's not necessarily the right answer either. There's got to be a balance and a combination.

You know, a hybrid solution, right, where we do some on-prem, some in the cloud, that kind of thing.

[01:01:39] Ted Hallum: Yeah, absolutely. I think at least for the foreseeable future, as many jobs as are going away due to automation, I see it creating opportunities for new jobs. You have to be willing to learn and get a different skill set, but usually for every job that's displaced, there's some new type of job that gets created.

Because somebody has to be building all that automation, so there are a lot of opportunities in that arena, which should be of interest to this audience. And then also, absolutely to your point, Rob, if you're spending 30 grand a month on cloud instances, you don't have to do that for too many months before you look back and think, wow, I could have bought a lot of GPUs, and then I could use them to my heart's content and not be paying for them anymore.

So, all right. For the next question, well, actually the next two questions, these come from Gabriel Riley. His first question is: typically a model can't predict something that hasn't already occurred in its training data. With a supercomputer, does that become possible? I would assume a well-trained neural network could draw inferences across disparate features, but is that practical in a production environment?

[01:02:50] Rob Albritton: So, I could be proven wrong and somebody could surprise me, but I think the answer is no. I don't really think so, and again, I'll blame it on today's paradigm and where we are today. Yes, we have incredibly deep, many-layered neural networks.

But there have to be examples, and boundaries and rules, finite possibilities. So I think the answer is no. Even with the world's fastest supercomputer, I don't think so. However, adding to that, I think you said operationally feasible, is that what you asked?

He...

[01:03:40] Ted Hallum: He says, is it practical in a production environment?

[01:03:44] Rob Albritton: Practical in a production environment? That's even more emphatically no. Definitely not. What is practical today is MLOps. I'll go back to MLOps over and over and over again, right? In a production environment, you want to train your model, you want to be able to monitor your model, and you want to be able to do something about it when your model doesn't perform well, as quickly as possible. Whoever cracks that code first wins in business, right?

So if you think about things like pharma, right, and the many, many iterations of potential DNA sequences and things like that you have to run through to create a vaccine today, take the COVID vaccine, right? They weren't jabbing people's arms over the year that they developed that, right?

That was using machine learning, predictive analytics, and high performance computing to iterate over and over again. Whoever can monitor their model and tweak it fastest on their supercomputer wins, in my opinion. So, yeah, I'll stop there.
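
To make the monitor-and-react loop concrete, here is a rough sketch of one common monitoring check: comparing live feature distributions against the training baseline with a two-sample Kolmogorov-Smirnov test. The significance threshold and the retraining trigger are illustrative assumptions, not a prescribed production setup.

```python
# Rough sketch of production drift monitoring: compare live feature
# distributions against the training baseline. Threshold is illustrative.
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(train_features, live_features, alpha=0.05):
    """Return indices of features whose live distribution differs significantly."""
    flagged = []
    for i in range(train_features.shape[1]):
        _, p_value = ks_2samp(train_features[:, i], live_features[:, i])
        if p_value < alpha:
            flagged.append(i)
    return flagged

# Synthetic example: feature 0 shifts after deployment.
rng = np.random.default_rng(0)
train = rng.normal(size=(5000, 3))
live = rng.normal(size=(500, 3))
live[:, 0] += 1.0  # simulated drift

if drifted_features(train, live):
    print("Drift detected -> alert the team / trigger retraining")
```

In practice the "do something about it" step Rob describes would hang off that alert, whether that is retraining, rolling back, or routing the drifted samples into an active learning queue.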

[01:04:57] Ted Hallum: Yeah. Now, when it comes to this, I mean, right now we're still in the era where supervised machine learning rules the day, and these are interpolation machines, not extrapolation machines.

And a lot of times when people talk about a well-generalized model, I think our human mind tends to drift towards extrapolation. That's what we want our AI to do. But the reality of AI right now is that it can perform well within the bounds of what it's seen in the training data, and it doesn't matter how much compute you have.

You're not going to overcome that. And I think we see a testament to that in these massive natural language transformer models, like GPT-3, with its 175 billion parameters. They want to get that all-inclusive, the-model-can't-be-fooled type of performance, and the best approximation to that is just growing a larger and larger and larger training set.

And then of course you start training a model on that with massive amounts of compute. You know, GPT-3 is not even a model that you could train on your own. It's developed by, is it DeepMind or OpenAI? One of the two, I can't remember. And you actually have to pay them to get access to the API, because it only runs on their infrastructure, because they spent ten million dollars training it or whatever.

But Gabriel's second question is: with the number of subdisciplines in data science, and he says machine learning, analytics, database engineering, et cetera, what would a hiring manager like yourself recommend that those who are new to the field focus their energy on, especially considering the current environment where products offered by companies like Hugging Face and DataRobot are providing machine learning services at scale?

[01:06:43] Rob Albritton: Yeah, I feel like I keep repeating myself, but I will say it again: MLOps. I mean, being able to train a model is one thing. You know, NVIDIA talks about the democratization of AI, right? That's what they push, and that's kind of happened, right? I mean, plenty of high school kids, and even younger, are now able to use open source libraries, open APIs, open this, open that, to train models.

And a lot of it is becoming easier and easier. In general, I know I'm generalizing a bit there, but maintaining a model's performance is still hard, right? And when you look at, you know, DataRobot was used as an example, we know plenty of folks at DataRobot. I personally have worked with them since the early days.

And this is not a knock at them, but like many other companies, DataRobot is very AutoML focused, right? Turn everybody into a machine learning engineer. Okay, great. Now what do I do, right? How do I employ that model? You still need the operational domain expertise. That's one thing I would say: don't forget that.

Right? There are so many folks now, I think, who think, if I just hire the best machine learning engineers, I'll win, right? I'll be able to build the best solution. You might build the best product, or an awesome product, but is anybody actually going to use it? Right? So you still need that domain expertise.

So don't forget about that. So I always say, you know, have a combination of technical acumen and, well, it depends what you're looking to get into, right? If you just want to do hands-on-keyboard technical work, yeah, fine. Get as deep in the weeds and learn as much about that technical domain as possible.

But if you want to expand your horizons, maybe get into management or build solutions and interface with customers, I think you need other skills as well to combine with that. So maybe business skills, maybe management skills, but also just domain expertise, right? There are plenty of people in this Veterans in Data Science and Machine Learning ecosystem.

There are a lot of us that have been intel analysts like Ted, or even, you know, trigger-pulling, door-kicking operators, right, that have domain expertise. And then, if you learn machine learning and data science skills on top of that, you can build solutions that are applicable to that mission that others can't build.

And that is a differentiator that not a lot of folks outside of this community, outside of veterans, for example, have.

[01:09:47] Ted Hallum: Yeah, absolutely. That brings my mind back to a virtual coffee call that I had in the community just a week or so ago with one of our members whose undergraduate background was in nursing.

His focus when he was in the Army was in the medical domain. And now he's working on his master's, I think in software engineering with an emphasis in machine learning, and talking to him, some of the ideas that he has to apply data science and machine learning to the field of medicine and patient care are just phenomenal.

He thinks of things that a plain machine learning engineer would never think of, because he has that hands-on experience working in the medical field, working with patients, and he knows what the biggest pain points are. They're obvious to him in a way that they wouldn't be obvious to somebody that doesn't possess that domain expertise.

So, yeah, that was vividly clear to me in that conversation, where I was just riveted, drinking my coffee, saying very little, and just listening. So, okay, Rob, as we wrap up, one of the questions I love to ask is the unasked-question question. As we've talked, and I've thrown my questions at you and our listeners have sent their questions at you, what is the elephant-in-the-room question that we didn't think to ask that we should have asked?

[01:11:07] Rob Albritton: Yeah, so I'm actually very surprised that this group didn't ask anything about adversarial ML attacks, right, and, you know, deepfakes and, I guess, the dark side of machine learning that we're hearing about today. Nobody asked any questions about that. Which is probably

[01:11:34] Ted Hallum: going to be the catalyst behind the rogue robots, right?

[01:11:37] Rob Albritton: Well, exactly right. And offensively too. I think about, you know, we worked a lot of what we called counter-AI at MITRE. So, how do we also deceive adversarial models, right, and systems that are using algorithms? How do we deceive them? So I'm surprised nobody asked anything about that.

It might be a little bit extreme, right, to say a deepfake could set off a World War III, or, you know, some might argue that we're there right now, but another conflict. But I do believe it's possible. You know, when you think about insular nations like Russia or North Korea, or places like that, if they were to fake an event, let's say, and not just a person saying something, right?

Not like the deepfakes we've seen of President Obama and Trump and those guys saying things that they didn't actually say, but that look pretty darn realistic. Things like, you know, maybe the assassination of a world leader, right? If that were to be faked, but realistically enough, there are factions and nations around the world that might take action.

And that could set off conflict, right? So that kind of thing. And internally, right, there's been a lot of stuff over the last few years with police brutality and things like that. What if somebody, the FSB or some group around the world, another nation, faked an incident that didn't actually happen but that sets off massive protests around a country, right?

And riots. Those kinds of things are, quite honestly, I think, very serious and very realistic and very possible today, and especially over the next few years, and we should probably invest more in detecting those fake scenarios, detecting deepfakes, and countering them in some way.

[01:14:07] Ted Hallum: So, earlier it was clear that you felt like cultivating a skill set in MLOps and active learning and things like that was a clear winning way to go if you want to be in demand. I'm curious, when it comes to adversarial attack detection on the defensive side, and maybe the skill set to engineer adversarial attacks on the offensive side, where do you think those types of skills stack up, you know, in their value in comparison to MLOps and active learning, that stuff we talked about earlier?

[01:14:44] Rob Albritton: I think MLOps with active learning in it, I'll say, as part of it, a component of it, is probably applicable to a wider audience. Basically every single company, firm, or organization using ML needs those skills.

Right. Whereas I think, especially on the offensive side, when you talk about adversarial attacks, that's probably a little bit more narrow, right? Because I don't think you've got Google committing offensive attacks. I don't think so, but I could be surprised. But yeah, that would be more focused on intelligence community and defense applications.

Right? So far fewer employers. Probably on the defensive side as well, you're looking at more, you know, law enforcement, the IC, and the DoD. But, you know, just like cyber attacks are impacting just about every large firm around the world right now, or at least putting them at risk,

the same could be said about adversarial ML attacks and data poisoning, and especially insider threat. Every company should probably have an insider threat program because, you know, at NVIDIA we relied on machine learning platforms, MLOps platforms, things like that, to run every day: financial systems and everything we used to basically keep the company running smoothly.

Right. Who's to say that an insider couldn't inject poisoned data into that system and really damage the company, right? So I think we should all be thinking about that, and once these firms figure it out, they'll probably be hiring a lot of folks with those kinds of skill sets.
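
As a toy illustration of the data poisoning risk Rob raises, the sketch below flips a fraction of training labels and shows one simple guardrail: gating model promotion on a small trusted validation set. The dataset, flip rate, and threshold are illustrative assumptions, not a production defense.

```python
# Toy label-flipping poisoning attack plus a simple guardrail: gate model
# promotion on a small trusted validation set. All numbers are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=20, random_state=1)
X_train, X_trusted, y_train, y_trusted = train_test_split(
    X, y, test_size=0.2, random_state=1)

# Baseline: model trained on clean labels.
clean_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(
    X_trusted, y_trusted)

# An "insider" flips 30% of the training labels.
rng = np.random.default_rng(1)
flip = rng.random(len(y_train)) < 0.30
y_poisoned = np.where(flip, 1 - y_train, y_train)

poisoned_acc = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned).score(
    X_trusted, y_trusted)

# A sharp accuracy drop against the trusted set is a red flag worth investigating.
if clean_acc - poisoned_acc > 0.05:
    print(f"Accuracy fell from {clean_acc:.2f} to {poisoned_acc:.2f}: check the training data")
```

Real poisoning defenses go well beyond this, but even a held-out, access-controlled validation set catches the crudest attacks before a damaged model ships.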

[01:16:55] Ted Hallum: Well, I think you're absolutely right that in the present moment there's probably not a huge demand signal for those defensive and offensive adversarial attack skills. But even over the short term, my last follow-up question to you would be: I know that in your role as the director for the AI Center of Excellence in Octo's oLabs, you've looked at a lot more machine learning engineer resumes than I have.

So I'm curious how frequently you've seen those types of skills, and, just to imagine, if you were needing to hire somebody with that skill set, how hard do you think it would currently be to find? I'm guessing that while the demand signal right now is limited, if you possess those skills, you might still be in high demand at the

key agencies that are looking for that, or whatever.

[01:17:43] Rob Albritton: Yeah, spot on. I do and have reviewed a lot of resumes, and very, very few, if any. Like, let's just say, if I've reviewed 200 resumes over two years, maybe three have had those kinds of adversarial skills, both detection and any skill sets in that domain.

Right. It's just not something that I've seen much of. It seems to be small pockets of experts, right, especially at FFRDCs, federally funded research and development centers, UARCs, university affiliated research centers, and academia. Those kinds of places seem to have the most talent in that domain.

So it would absolutely be a differentiator if someone had that. I would also say, and it was quite frustrating, time and time again, you know, we would be hiring for a defense AI solutions manager or defense machine learning engineer position at oLabs, and folks would apply for those positions, but they had none of that domain expertise that we talked about earlier.

Right? So, you know, building machine learning solutions for pristine, perfect academic data sets is much different than building them on Predator, Reaper, Black Hornet, you know, Shadow data, these kinds of just messy data sets that often have burn-in and all kinds of different issues that make them hard to work with.

Right. It's totally different, working on different networks, just a totally different environment, building machine learning solutions for defense customers versus, you know, the public sector or the private

[01:19:45] Ted Hallum: sector. A hundred percent. Yeah. Rob, thank you so much for

being so generous with your insight and your time, coming back onto The Data Canteen again to have this conversation with us. I think this has been incredible for those who tune into The Data Canteen on a regular basis. I think the last thing we haven't covered is, holy cow, people could want to reach out to you for like a hundred different reasons after this conversation.

So I've got your LinkedIn username there; it's been beneath your video this entire time. But if you've got another preferred means of contact, I'd love to get that out there so that people know how to reach out to you if they're interested in oLabs or any of the other topics that we've talked about here.

[01:20:26] Rob Albritton: Yeah, please, LinkedIn would be preferred, so reach out on LinkedIn. Please, no advertising, just hit me up. If you want to learn about oLabs, come visit oLabs. We're happy to bring folks through, if you're a US citizen, of course.

[01:20:44] Ted Hallum: And that's it. So, Rob, again, thanks so much for coming on, and we look forward to the next time.

[01:20:51] Rob Albritton: Thanks, Ted. Appreciate it.

[01:20:53] Ted Hallum: Thank you for joining Rob and me for this conversation and accompanying us down a weird path of fascinating questions. With that, until the next episode, I bid you clean data, low p-values, and Godspeed on your data journey.