The Data Canteen: Episode 05

Rob Albritton: AI/ML Hiring Manager

 
 
 

Show Notes

In this episode, I chat with Rob Albritton, Senior Director for the Artificial Intelligence Center of Excellence within Octo's oLabs. Rob and I have a wide-ranging conversation about the cutting-edge work being done at Octo, why veterans especially should consider opportunities at Octo, Rob's personal data science journey/background in the Air Force, his insights as a hiring manager for AI/ML teams, and finally the current AI race and what keeps him up at night.

 

FEATURED GUEST:

Name: Rob Albritton

LinkedIn: https://www.linkedin.com/in/robalbritton/

 

SUPPORT THE DATA CANTEEN (LIKE PBS, WE'RE LISTENER SUPPORTED!):

Donate: https://vetsindatascience.com/support-join

 

EPISODE LINKS:

Octo's Career Opportunities for Veterans: https://tinyurl.com/ape4xc

Octo: https://www.octoconsulting.com/

Octo's oLabs: https://tinyurl.com/yyha236j

oLabs' AI Center of Excellence: https://tinyurl.com/kt3zpp2u

MLflow: https://mlflow.org/

Kubeflow: https://www.kubeflow.org/

How to deploy Kubeflow with minikube on a single node: https://tinyurl.com/myyyd2ht

Lex Fridman Podcast: https://tinyurl.com/5dpbk67f

The AI Podcast (Nvidia): https://tinyurl.com/axphyb6a

 

PODCAST INFO:

Host: Ted Hallum

Website: https://vetsindatascience.com/thedatacanteen

Apple Podcasts: https://podcasts.apple.com/us/podcast/the-data-canteen/id1551751086

YouTube: https://www.youtube.com/channel/UCaNx9aLFRy1h9P22hd8ZPyw

Stitcher: https://www.stitcher.com/show/the-data-canteen

 

CONTACT THE DATA CANTEEN:

Voicemail: https://www.speakpipe.com/datacanteen

 

VETERANS IN DATA SCIENCE & MACHINE LEARNING:

Website: https://vetsindatascience.com/

Join the Community on LinkedIn: https://www.linkedin.com/groups/8989903/

 

OUTLINE:

00:00:07 - Introduction

00:01:12 - All about Octo, oLabs, and the AI Center of Excellence

00:08:01 - What makes Octo so unique in the GovCon space

00:11:10 - What makes Octo a great career fit for veterans

00:13:56 - Domain experience (i.e., veterans' military experience) is important to Octo's mission

00:16:43 - Octo's Program to Hire Veterans

00:17:45 - Rob's Journey from the Air Force to DS, ML, and AI

00:22:44 - You don't necessarily need a Master's Degree to get into DS/ML

00:24:04 - Rob's insights as a hiring manager for AI/ML teams

00:30:56 - Do you need a U.S. Government security clearance to work at Octo?

00:31:58 - How to best frame your DS/ML skills to get an interview

00:36:02 - Rob's recommended upskilling path to get into DS/ML

00:39:42 - The current "killer skill" to put you in high demand (MLOps!)

00:42:12 - Rob's advice about how to start learning MLOps

00:54:42 - What scenarios keep Rob up at night?

00:55:13 - Rob's favorite podcast recommendations

00:56:46 - How to contact Rob

Transcript

DISCLAIMER: This is a direct, machine-generated transcript of the podcast audio and may not be grammatically correct.

Ted Hallum: [00:00:07] Welcome to episode five of The Data Canteen. Today, I speak with fellow community member, and Senior Director of Octo's Artificial Intelligence Center of Excellence, Rob Albritton. We have a wide-ranging conversation about the cutting-edge work being done at Octo, why veterans especially should consider opportunities at Octo, Rob's personal data science journey and background in the Air Force, and finally the current AI race and what keeps him up at night.

For the sake of full disclosure, I'm a machine learning engineer at Octo. However, this is a great opportunity to point out that this podcast is completely listener supported.  I haven't been compensated in any way for doing this interview and no one has reviewed or approved what you're about to see. All Data Canteen guests are objectively invited for one simple reason: They possess valuable advice to inform your data science journey and they care about veterans. Today's episode is no exception.

Hey, Rob, welcome to the Data Canteen. I'd just like to ask you - I want to know a little bit about Octo, oLabs, and the AI Center of Excellence. You and I have talked about it a little bit before. It sounds fascinating. I'd love to hear the whole story.

Rob Albritton: [00:01:10] Thanks Ted. Thanks for having me on here.

Yeah, so I've been with Octo since January of 2020. So, about a year now - a little over a year. Prior to me joining, we didn't really have an AI practice. Octo traditionally has been a company focused on Agile and DevSecOps. At one time, we were the largest contractor up at Kessel Run, the Air Force's software factory up in Boston. We have some of the world's foremost experts in that field, especially on the Agile side, but we are rapidly moving into more emerging technologies. And I wouldn't necessarily even call AI an emerging technology - that kind of bugs me when people say that, because guys like us, we know that AI has been around since at least the 1950s, '56 or whatever it was at Dartmouth. But the type of AI and machine learning and data science that we do today is emerging. The amount of compute power we have is different. There are a lot of different things about it today. The point is we are investing heavily at Octo in those emerging technologies, especially AI, machine learning, data science, data engineering, those kinds of things.

oLabs specifically is the research and development / prototyping arm at Octo. Let me go back just a sec. A couple of facts: Octo has about 1,100 employees. So, we're a pretty good-sized firm now. We've grown extremely rapidly over the last 18 to 24 months from just a couple hundred employees to 1,100 or 1,200.

That's explosive growth, and most of that has come through mergers and acquisitions. We merged with a company called Connexta that built the DCGS Integrated Backbone, the DIB. We merged with a company called Sevatec just a few weeks ago actually, right around the holidays.

With those acquisitions, we've exploded in size and reach really throughout the Federal Government. So, pretty awesome to see that happen. We've also invested heavily in oLabs, right? That research and development organization. We are literally building a physical instantiation of it right now in Reston, Virginia - a massive 14,000 square foot facility dedicated initially to AI research and development, specifically targeting government problem sets. We've all heard the term "DARPA hard". When you have a DARPA hard problem, you have a problem that just about nobody else on the planet can solve. It's been an intractable problem that we've been trying to solve for a very long time.

Those are the kinds of problems that we want the government to bring us at oLabs. We have what I believe are some of the world's best machine learning engineers. We've hired people from Bosch automotive. So, we have a guy on our team that actually did fusion, sensor fusion, and all kinds of really neat Advanced Driver Assistance System (ADAS) research for autonomous driving systems at Bosch, both in Germany and in Detroit.

He also worked for GM building autonomous self-driving car systems for them. We've got guys from ETH in Zurich, Switzerland. We have the former lead game developer from Northrop Grumman on our team. So, when I say we're building a powerhouse, all-star team - I really mean that.

And that's perhaps the most important part of oLabs. It's the people, right? Ultimately, that 14,000 square foot facility is awesome, but it's just a vessel for all those awesome people to innovate in. Getting back to the physical space. We just received about a million and a half dollars worth of AI compute.

So, we bought 15 petaflops of AI compute from Nvidia. I'm a former Nvidian, so I was happy to do that. So, we bought three DGX A100s (I'm sure we're all familiar with those), the eight-A100 GPU servers, and a couple of petabytes of FlashBlade storage from Pure Storage as well.

I'm not trying to market for those guys. They just build awesome hardware, primarily for training models. We don't want to rely on AWS or Azure. We use those services, but we want to have our own in-house capabilities. We're also building a dark room where we can do night vision, tactical AI stuff.

We work on the Integrated Visual Augmentation System (IVAS) program down at Fort Belvoir, with the Army, which is one of the Army's largest S&T programs in history. So, we want to be able to do night vision testing. We're building a really robust capability.

I guess I should hit on one more thing. We do have multiple COEs, Centers of Excellence, within oLabs. The AI Center of Excellence is where the emphasis is right now in oLabs, but we will eventually branch out and incorporate all of our Centers of Excellence within that oLabs space.

I think we have five, if I'm not mistaken: Agile, DevSecOps, Emerging Technologies, AI / Machine Learning, and then a Data Science Center of Excellence. So, I'll pause there. Otherwise, I'll take up the whole hour. I get excited. It's a really cool facility.

Ted Hallum: [00:06:01] So, the facility that you guys are building out, the hardware and compute power that you're bringing into that facility, the pedigree of the human resources that you've been able to bring onto the teams...

That sounds incredible. I'm curious. You talked about some of the things that the AI Center of Excellence is already doing with its current momentum, but I know that there are likely some things that you see oLabs empowering the government to do that the government's not doing yet. Do you want to talk about any of that?

Rob Albritton: [00:06:27] So, oLabs is intended to be a place where we can partner. We're all familiar with Pivotal Labs and what they did with Kessel Run, and essentially opened their doors, right, to the Federal Government and said, "Hey, we want to partner with you."

We will show you how we do pair programming and how we do all of our industry best-practice processes, if you will, for rapidly innovating and creating new technologies and getting code out the door. We want to do the same thing. So we're not copying, but we're taking our own flavor out of the Pivotal book, and we want to open our doors to the government and say, "Hey government, we know you don't all have 15 petaflops of AI compute. We know you don't have these hardware resources."

The government labs have some of the best talent on the planet - in my opinion, some of the best machine learning engineers, data scientists, and data engineers are in the service labs, right?

ARL, AFRL, those kinds of places - but they're few and far between. We have a core cadre of 15 of some of the world's best talent in this lab space that the government can come to, sit onsite with us. We're not asking for money. This isn't about, "Hey, write us a contract and then come into the lab."

This is a, "Hey government, you have a intractable problem, or something really hard to solve. You have unique data. Bring it to us, and let's see if our talented engineers and scientists can help you solve it in our lab. So, that's the vision, right? The longterm vision. We want those kinds of partnerships.

So, hopefully that answered your question. I'll pause there again.

Ted Hallum: [00:08:01] In the government space, specifically supporting the DOD and the intelligence community, Octo has a lot of competition, and some of it is from entrenched companies - big names like Booz Allen Hamilton, SAIC - but Octo is different.

I know from talking to you, Octo's got a different atmosphere, a different culture, and I just wondered if you would expound on that so that people could get a grasp of the scope in which Octo is a different beast than some of these other players in the GovCon space.

Rob Albritton: [00:08:33] Yeah. I have to hold back from giggling a little bit because we are so different.

Those guys are, with all due respect... I think when you get to the size of some of these firms, the Boozes of the world, the bureaucracy creeps in, and we have done a really good job of ensuring that we continue to be a flat organization. So, we don't have a lot of hierarchy in our org structure.

It's not uncommon for Mehul Sanghani, our CEO, to sit in on technical discussions with guys and girls that are two weeks out of college, that we've just hired. We're all equals, which I think is unique, right? We don't think of each other as, "Oh, you're the Director, you're the CTO, you're the..." - We're all technologists. And so I think that helps a lot. We have peer relationships instead of supervisor relationships within our organization. As a leader in the organization, my team members are my peers. So that's one piece of it.

I think it's also just a mindset. It's hard to explain, but we are a bit counterculture, if you will. So, when we see the big boys, the way they act, we automatically will do the opposite, almost like a two-year-old. I have a two-and-a-half-year-old; when you tell them to do something, they do the opposite.

That's us. We're almost a little bit like Cowboys. We like to go against the grain, and I think that gives us the guts, to try new things and to do things differently. Like in proposals to the government, we will submit really wild concepts that maybe go above and beyond what the government is actually asking for, and a lot of these bigger firms will just, you know, dot their I's, cross their T's, submit exactly what the government's asking for. We push the boundaries. We put our foot on the accelerator. That's just Octo. We work hard. We play hard. We take care of each other. Yeah, I think, especially in the oLabs group, we're more like a Silicon Valley or Austin or Boston startup than we are a big, slow behemoth government contractor.

Ted Hallum: [00:10:35] There are not very many companies that operate the way Silicon Valley does in Virginia or Washington DC supporting the Federal Government.

So, I think you've whetted our appetite between telling us about the mission that Octo has, the incredible compute power that they're building, the expertise and the types of teams that are there, the culture. But I invited you on this show for a reason: we have a very specific audience. We've got veterans in data science and machine learning.

So, I'm curious to know what is it about Octo that is particularly a value added proposition for veterans?

Rob Albritton: [00:11:10] Good question. We are very serious about a few things. One is veterans' wellbeing.

So, we don't only want them to come work at Octo. We want to take care of veterans writ large. We have many veterans at the senior levels of our company, especially in the CTO organization, which is where oLabs fits. So a lot of our technologists, like myself - I'm an Air Force veteran.

We have a couple of other guys that are Navy vets, a couple of Marine Corps vets. Anyways, we have veterans in our organization, and we do a lot of outreach, and we believe that veterans have a place in technology, quite honestly. So, we sponsor a lot of different veterans events outside of Octo, not with the intent of recruiting them.

If they want to come work for Octo, great! That's an added value, but, yeah, we just really want to be involved in and take care of the veteran community because we've been there. We are veterans. We understand what it's like to leave that tribe, as you and I were calling it yesterday, that veteran tribe.

You need that, and I think technology and technology groups, technology companies, can be that tribe. Specifically within Octo, we have a lot of defense contracts. We have a lot of IC contracts in the U.S. Intelligence Community. We work with the Australian defense organizations.

We work with the Ministry of Defence (MOD) in the UK. Therefore, we need people that understand those missions, right, at Octo. Again, we are building one of the world's best machine learning and data science and emerging technologies groups within oLabs.

However, if you put a bunch of what I'll call nerds - I'm one too - if you put a bunch of nerds in a room together and tell them, "Build cool technologies!", that's what they're going to do. The guy that's really interested in reinforcement learning, he's going to build reinforcement learning prototypes and try reinforcement learning stuff. The guy that loves playing with convolutional neural networks, he's going to do that.

But, there's not going to be a lot of mission focus unless they've been in the military and actually spent time doing the mission. So, what we want to do at Octo is plug veterans right into oLabs. We want to hire folks that have actually been trigger pullin' operators down range or analysts behind a desk at an intelligence agency.

Because they've actually done the mission, and they can ensure that the prototypes we're building are mission-focused. The last thing we want to do in oLabs and at Octo is build prototypes and applications that sit in a Conex container and collect dust.

Winning contracts is what we have to do to pay our bills, but ultimately, for us, the success is when we build capability that makes its way into the operator's hand, makes its way down range, and impacts the war fighter, analysts, or decision makers.

Ted Hallum: [00:13:56] One of the things I like to say on this show is: go where you're celebrated, not where you're tolerated, because there are a lot of places where you can take your skills. If you've gone and you've done the upskilling, there are lots of places where you can go and get a job, but it's not necessarily the case that you're gonna be able to capitalize on your past momentum.

It doesn't mean that they're going to appreciate those previous work experiences that you had in the military, and I think you'd agree that domain experience matters. Being able to have hands-on knowledge of the problem - in this case, the problems that military members face - is directly applicable to the types of artificial intelligence solutions that you want to put together there in oLabs, right?

Rob Albritton: [00:14:38] One of the things I think is lacking, especially in the U.S. Department of Defense and Intelligence Community - but really throughout the Federal Government - is operationalized AI. AI is used, various kinds of machine learning applications are used, but there aren't a whole lot of really prominent success stories, quite honestly, of operationalized AI, if you will.

And I've worked on Maven, IVAS, you name it, with the Joint Artificial Intelligence Center (JAIC) - all over the place. There are tiny little pockets, but not a whole lot of truly scaled, operational AI on the battlefield, and part of the reason is because there aren't enough domain experts helping build those solutions, right?

So, the solutions that are being built are oftentimes built in a vacuum, in a stovepipe, without that domain expert giving guidance on what would be useful on the battlefield.

Ted Hallum: [00:15:30] I think if you don't get people that have domain experience - in this case, specifically a military background - you can easily have people either providing answers to the wrong questions or providing solutions to problems that don't exist. Having that experience makes sure that whatever you're creating has a meaningful function for the ultimate end-user.

Rob Albritton: [00:15:53] Absolutely, and the data sets themselves are totally foreign. A defense data set oftentimes is totally foreign to somebody that's built machine learning applications or done data science in the private sector.

Maven, again, for example, when you're talking about Military aircraft, the video coming off of them, it's a lot different building computer vision applications for those assets than it is for finding cats in Facebook images, or whatever the flavor is of the day.

That stuff is pretty easy. When you're talking about shaky, grainy video - really poor images - it makes things a lot harder. So, it's just a totally different domain, and to be able to find people that have experience in that domain and the technical acumen on the machine learning side of the house - yeah, that's a win-win. They're unicorns though. There are a few of them out there, hopefully in this group.

Ted Hallum: [00:16:43] Before we move on to the next part of the show, I did want to just point out that Octo is so serious about hiring veterans that they have a whole program dedicated towards that effort.

If you're interested, I've got the URL up here on the screen. For anybody listening on Apple Podcasts, the URL is https://www.octoconsulting.com/careers/veterans. You can go there to find out about all the different career opportunities that are specifically geared towards you with your military background. Rob, with that being said, as I mentioned before, you are a member of the Veterans in Data Science & Machine Learning community yourself. You're an Air Force veteran.

You've got a whole story that precedes you up to this moment, being on The Data Canteen, and we'd love to hear about that. So, just in terms of where you're from, your interests growing up, whether you loved STEM and math or didn't, your progression from college and the military until now, being the Senior Director of the oLabs AI Center of Excellence - we'd love to hear that story.

Rob Albritton: [00:17:45] So, was I interested in STEM? No, I was not growing up. I'm from North Carolina, military family, Fort Bragg. I've had ties to the Army and the Special Forces community my whole life. I was not really interested in STEM growing up. You know, not really focused on academics in high school, quite honestly, when I should have been. So I took a bit of a non-traditional route to this line of work, quite honestly.

You know, I barely graduated high school, had terrible grades, didn't really get into any of the schools I applied to. So, I was forced to make a decision, and that decision was the Air Force. So, I enlisted in the Air Force. I was an electronic warfare and electro-environmental maintainer. So, a maintainer on a reconnaissance aircraft at Offutt Air Force Base - RC-135s. I was part of the 38th Reconnaissance Squadron. Got to do my deployments to Qatar during Operation Enduring Freedom over Afghanistan.

But I realized the military was a little too structured for me. So, I did my four, got out, went back to school, got my degree in geography from the University of Maryland, thinking that I was going to follow in my father's footsteps and actually go into the Intelligence Community. I did that for a little while. I left the University of Maryland, went to NGA, the National Geospatial-Intelligence Agency, where I worked as a Nigeria analyst for a couple of years.

Then, I got recruited by the Army Geospatial Center to work on special projects and SOF-related stuff - Special Operations Forces. So, building geospatially enabled applications for them. And, I loved it, but I quickly got fed up with the bureaucracy of government. Originally, I thought I would stay in that line of work. I went to the Defense Intelligence College. Got a degree in Strategic Intelligence with a certificate in Denial and Deception, thinking I would go back into the Intelligence Community, maybe even do clandestine work. But again, I got totally fed up with the bureaucracy of government before that could happen, decided to leave, go get an MBA, and use that as my ticket out to Silicon Valley.

That was my goal, which is what I did. I landed at Nvidia, where I joined their public sector team very early and built that team out to a pretty good-sized, robust team - and realized I missed the mission. So, I kinda left the government, then went to Silicon Valley, and realized that I missed the mission.

So, I joined MITRE, where I was working on Project Maven and a couple of other really cool Pathfinder programs in AI within the Department of Defense. Let's see, how did I really get into this field though? It was kind of hard knocks, right? I had to learn. I was working on a project at the Army Geospatial Research Lab, which is part of the Army Geospatial Center, where I needed to do some predictive analysis and do search space reduction - try to reduce the area that rescue personnel from the Air Force had to fly to look for somebody behind enemy lines.

To do that, we actually used machine learning. So, we ended up taking thousands of previous reports of pilots crashing and things like that behind enemy lines, other hostage situations, all kinds of stuff, text reports, and using that information to try to predict where somebody might go in the future.

So, we used that kind of information combined with terrain and many other factors that might impact where somebody might travel on foot to reduce that search space. So, that was really my intro to machine learning and predictive analytics. We used some Bayes nets and things like that.

Then yeah, at Nvidia, I learned really the business side of AI and just the scale of some of these companies. Nvidia spends $2 billion a year on IRAD, internal research and development. It was just eye-opening.

I had built machine learning applications for the Army, and then going out to Silicon Valley, and Nvidia, and seeing how it's done out there - it was quite a bit different. But yeah, that led me back to MITRE, then I got recruited by these guys at Octo, pitched our leadership on this oLabs vision, and they've been nothing but supportive. Here we are now building what we think is GovCon's most robust AI innovation center, at least in the DC area, perhaps on the East Coast, and maybe it'll be the most robust innovation center in the country.

Ted Hallum: [00:22:06] Wow. That's a fascinating path. I think it should be reassuring to people in our audience that if you've got an indirect path, that's actually not weird. In fact, it may be more the norm than strange to have a winding path. And, I also think that means that you're the sort of person who's ambitious, you're taking initiative, you're taking control of your career, you're chasing new opportunities, and you're probably excited about these technologies - all of which makes you a great fit for this type of work. I think that should all be reassuring to people who are maybe getting into this and they think, "I don't know if this is a good fit for me or if I'm just crazy." I don't think they're crazy at all.

I think this is a good move.

Rob Albritton: [00:22:44] Yeah, I totally agree. I think there are some folks out there, probably even listening to this, thinking, "I need a degree from UVA, or Stanford, or wherever. I need a degree in data science. I need a degree in computer science with an emphasis in machine learning."

That's not necessarily true. There are so many other mechanisms out there to gain a formal education, but you don't even need that anymore with the democratization of data science and AI. Anybody can buy a cheap GPU now, spin it up, and learn how to... the libraries are so easy. You don't even have to know how to write CUDA (Compute Unified Device Architecture) anymore. You don't have to get deep into the C or C#. There are so many easy ways to build applications now. But yeah, to your point, ultimately you gotta know when to walk through the door when it opens. Sometimes, we're not always good at that, but when it happens, we've got to walk through that door. For me, it was probably Nvidia. Just taking that opportunity and really learning about all the different AI opportunities out there, and the fact that there was a major lack of individuals in that Silicon Valley ecosystem that had both the military knowledge and the technical skillset. When I saw that, I said, "Oh my gosh, I can be successful in this field!"

Ted Hallum: [00:24:04] Absolutely! So I'd like to shift gears and focus a little bit on one aspect of your current role. Part of that is you have to be a hiring manager.

So you have to make sure that vacancies on your teams are filled. We have a lot of people in our audience who are in the process of upskilling, or who have finished upskilling, and they're trying to get their first data science or machine learning position. In the field of machine learning, we talk about how some types of models are black boxes, because you really don't understand how the model is working to generate its inference.

I know there are people in our audience that to some degree feel like the hiring process is like one of those. They don't quite understand what's going on in the background, what the hiring manager is actually looking for.

They know, to some degree, they need to have the technical skills. They need to have domain experience, but they don't understand how that's weighted. Is technical expertise way more important than domain experience? Or, am I undervaluing domain experience?

So, what I'd like to do - I thought about the ways that these different qualifications can be binned, and I believe that it works pretty well to bin them as technical expertise, soft skills, character / personality traits, domain experience, and then finally credentials.

For each of those categories, if you could just tell us, generally, how important it is to you - so like low, medium, or high importance in terms of your hiring decision or whether or not somebody is going to get an interview. And, also, just one example, even if it's a category that you don't place a lot of importance on, of something critical to you that you are looking for in each of those categories.

Rob Albritton: [00:25:41] It's hard to give a steadfast rule for these kinds of things, because we oftentimes are hiring for different kinds of positions.

Maybe it's a position on a contract with a specific agency or customer that wants us to do research in a specific field. Maybe it's reinforcement learning or something like that. Obviously, in that case, we would be looking for somebody with specific technical skills. Maybe domain experience, soft skills, personality traits, credentials, those kinds of things would still be important, but maybe less. But, what I'll try to do is give a more general answer for what I look for in most of our oLabs AI Center of Excellence hires.

So, going down that list, technical expertise - I think that one is, I would rate it as a medium simply because I think we can upskill and train people once they're on board. We don't want to have to start from scratch.

If you've never taken a single machine learning class, don't have the basic prerequisites in mathematics, or don't know how to write a single line of code - that makes it really difficult. But, you also don't have to be a seasoned expert, right? We're not always looking for somebody that has 15 years of specific expertise in a specific technical area.

Soft skills I would rate high. I am always looking for folks that have soft skills. Being able to just communicate, right? It's one thing to build a machine learning application, train a model, deploy the model, do your inference, but can you then explain why you built it a certain way? That operator or analyst that is using the model and the application that you just built - can you explain to them how it works, why you built it a certain way, or how it impacts their ability to do their job? If you can't do that, it's really hard for us to hire you, quite frankly. So, we're always looking for those soft skills as well.

There are certain roles, I will say, however, where there are individuals that enjoy just hammering out code, right? Just building and working with data behind the scenes, and they never want to see a customer. That's fine, but I do think that makes it more difficult to accelerate and advance your career.

So, I would rate soft skills very high.

Personality and character traits. That's another high one. You need integrity. Especially in this field, working with the government. Mission is critical to our company and our team. I would say we are almost obsessed with it. Every single thing we do, we consider the mission impact, right?

Every application we build, we consider the mission impact. Character traits, being ethical, doing the right thing for the military and making sure that you're driven to impact that mission. That is critical when we hire into our AI Center of Excellence.

Domain experience, again, it depends. But, I would say a medium as well. For certain roles, we do want some pretty deep domain experience. Let's say we're building machine learning applications for doing computer vision on tactical UAV (unmanned aerial vehicle) or UAS (unmanned aerial system) video coming off of that platform.

We probably want somebody with some domain experience and those machine learning technical skills.

Credentials, I would actually rate as low. And I know that's probably counter to what a lot of us have been accustomed to in the government or the government contracting space. The government - I'll use the obsession word again - they're almost obsessed with credentials. They want AWS Cloud Practitioner and Kubernetes certified this and that. I don't look for that. I don't think credentials mean you're actually skilled in that domain.

I know for a fact that I have worked with machine learning engineers and cloud practitioners that are not credentialed, yet they are far more skilled than individuals that have seven or eight credentials to their name. I think people get too obsessed with credentialing and adding it to their LinkedIn, their resume. That doesn't work with me. When I'm recruiting, that's not the kind of thing I'm really looking for.

So hopefully that hit on most of those, Ted. Again, it's hard to state a steadfast rule because everyone's so different. Every single hire is an individual, and that's how I think about hiring for machine learning, AI, and data science positions in the AI Center of Excellence.

Ted Hallum: [00:30:01] I think that was a fantastic answer. There's so many factors that go into it. But I think you gave our listeners some really good insight and probably even shocked them with some aspects of how you felt about soft skills, the importance of being able to communicate.

And also I think a lot of people think that they can't get a position in this space if they don't possess certain specific credentials and you've let them know, at least in your case, that's not necessarily a hard prerequisite to being eligible for a position.

Now you did mention integrity, you mentioned your prior work with the intelligence community, and, of course, as you introduced Octo they support a lot of government clients, some of whom conduct business in a cleared space. So, when we look at our audience here on this show, some of the veterans came out of the military with clearances, they've maintained those. Another portion of our audience didn't have a clearance and to this day they're seeking out positions that don't require a clearance.

What's the situation with that at Octo? Do you need a clearance to work at Octo? Do you not need a clearance? How does that work?

Rob Albritton: [00:31:02] Do you need a clearance? No, you don't need a clearance. I think it's beneficial, simply because we focus so heavily, especially in the AI Center of Excellence, we're focused so heavily in the DOD and IC space.

It's very helpful to have a clearance, but you don't have to have one. And speaking for Octo, big Octo, it's a definite "no", you don't have to. Not everybody at Octo has a clearance. We have contracts with GSA, HHS, DHS, a lot of different FEDCIV agencies, federal civilian agencies, that don't necessarily require a full-on TS, or SCI, or something like that.

That being said, it's definitely beneficial. Again, we talk about unicorns, finding people with domain experience, the technical skills, and clearances...That's pretty hard. But, this group is probably where those folks are. If you're listening, reach out to me.

Ted Hallum: [00:31:58] And we'll let you know the best ways to do that at the end of the show. So, we've talked about how the different qualifications that people bring to the table are weighted, at least in your mind, but that puts the cart a little bit in front of the horse, because before people can talk with you about their qualifications, they first have to get an interview, go through the phone screen, probably talk to a recruiter. There are so many different ways that people can frame their skills. They could do it with a resume. They could do it with their LinkedIn profile, or GitHub profiles to show off their actual work or technical chops.

What in your mind is the single most important approach for how someone can frame their skillset, just to get the first phone screen and whet Octo's appetite for what they bring to the table?

Rob Albritton: [00:32:44] If you're reaching out to me on LinkedIn and you're looking for a position, or even if we reach out to you and the communications have started, the way to keep that going and to make me more serious about potentially hiring an individual is - when you talk about GitHub, for example - I love it when somebody sends me their repo and shows me what they've actually built and what they're capable of. Show me some creative projects, not just your ability to code.

I think that's awesome, and I want to see that you actually do have the skills that you say you do because machine learning, even more so than data science, has become just completely filled with buzzwords that people are latching onto and they figured it out.

"I throw some of these buzzwords in, the filters pick it up, I get an interview, my chances are better of getting a job." People have figured it out. I have conducted so many interviews in the last year and a half where, if I just read the resume, it truly appears as though they are cranking out machine learning applications, training models, deploying them, operationalizing them every single day. Then, we dig a little deeper after a couple of phone calls, and really the person's just managing a contractor or they're managing a project. They're not actually doing it. But, I wouldn't have known that without talking to them, because the buzzwords were in there and the resume was awesome. I've learned though; I've caught on now. Really, we weed them out by, like I said - if you send me repos on GitHub or send me other examples, that goes a long way to me feeling, okay, this person is legit.

They really do have the skills they say they do. Let's move this along and have some serious conversations. Does that answer your question, Ted?

Ted Hallum: [00:34:29] That's exactly what I was looking for. I've said before on this show that it's just like us: if we were going to hire a roofer, we would go out and we would call several roofers.

We would ask for references. We'd want somebody who's got 50, a hundred, 200 successful jobs under their belt, with people that are willing to vouch for their good work. And we shouldn't expect AI and machine learning hiring managers to be any different. Nobody wants to hire somebody who says, "I just graduated from school and I'm full of potential - you can trust me!" That's not going to cut it. If you don't have actual work experience, then do projects on your own and get them in your GitHub. Another way to show off your ability to actually do this work is to just find a data set you're interested in and then do something cool with it.

But that's still showcasing that you actually possess the skills and that you're not just one of these charlatans, like you talked about.

Rob Albritton: [00:35:20] Personally, I don't care how you learn those skills, just show me that you can do it. And that gets back to the credentialing and, academic background and things like that.

At least for our team at Octo in oLabs, I quite frankly don't care that you have a degree from Stanford, because if I see your project example compared to someone who taught themselves in high school how to work with data, and their application is just as good - I don't care. The degree doesn't mean as much at that point. I'm not saying don't go get a fancy degree. I'm just saying it's not the end-all be-all. It's more of a "what have you done for me lately" industry, I believe. And, like Missouri, show me what you can do.

Ted Hallum: [00:36:02] You've set up perfectly for my next question. From your experience bringing people onto your teams for various types of jobs, from various backgrounds, they've probably all approached their upskilling a little bit differently. If you could recommend a single path to someone who hasn't started upskilling yet, but wants to, what would that path be? What path presents the best value in terms of the amount of money that they have to spend and the capability that they'll ultimately have at the end?

Rob Albritton: [00:36:32] I think, if I understand the question - you've mentioned these boot camps. Some of them are a couple months long. In general, that is a good way, in my opinion. They're so intense, right? Some of these bootcamps are so intense that all you're doing is learning deep technical skills in these domains - machine learning, data science - for six or eight weeks at a time. Quite honestly, those graduates, and we have a couple of them on our team, have proven to be more technically skilled than some of the folks with degrees, because they spend four years learning other stuff too.

Does that make sense? They're so specialized and deep into one area for those six or eight weeks, or however long it is. So I would say that's a really good way. Also, some folks might discount them, but I think even just the simple Coursera courses, Nvidia's Deep Learning Institute, and some of these different online platforms today.

They're actually pretty, pretty darn good. You'll learn a lot. And, honestly they're usually taught by people who have actually practiced machine learning in industry. They've actually done it. So they're taught differently than purely academic classes are taught.

I'll give you an example. Even at the elite programs, right? Stanford, Berkeley, University of Toronto, NYU - they are failing to teach students how to actually operationalize AI and scale it for real-world use cases. Specifically, what I'm talking about is MLOps, right? Machine Learning Operations. Unfortunately, they're just not teaching it. They teach machine learning and data science in such an academic way that it's stovepiped. They teach a project.

They don't teach how that project, that capability, could actually be scaled and used in a real world scenario, if that makes sense. So, I think some of these other programs that are not at universities, right? The boot camps, they do a better job of that because they're taught by people that have actually run machine learning teams and engineering teams at large corporations and things like that, where they actually have to implement an MLOps pipeline, a true MLOps pipeline, where you're training your models, you're deploying your models, you're maintaining them, you're monitoring their health, you're feeding new data back in, retraining it again, tweaking it. That whole cycle. That's not taught in school, unfortunately, by and large today.

Ted Hallum: [00:38:58] I didn't know how you were going to answer that question when I asked it, but I actually loved your answer, because I feel like there are some fantastic non-traditional education options that are available now through the bootcamps, like you mentioned, and through MOOCs. I've seen people come out very prepared having gone through those types of routes.

And I've done some of those types of courses myself, and just been shocked at the educational value that's there. In the domain of data science and machine learning, I think there's a little bit more respect for those non-traditional education methods. I think our broader society and culture has some catching up to do, but that's not going to happen until people hear podcasts like this, and they hear the merits of those different avenues.

My next question was going to be riffing off the idea of a killer app: would you say that there's any particular killer skill? But you talked about how a lot of times people don't get taught the skill of taking a model and putting it into production in a way that's realistic, that scales, and that's sustainable. And you said that's called MLOps. Would you say that would be the killer skill people should try to get?

Rob Albritton: [00:40:10] A hundred... if I could go over a hundred percent, I would. Yes. Being able to put your applications, your models, into production is key. Also, just the little things, right? Code repos and tracking tickets in JIRA and all these little things that we don't love to do.

Oftentimes, especially guys like me who are totally... I am not a detail-oriented person. That's probably why I didn't stick around the Air Force too long. I like to wing it, but you can't necessarily do that all the time. Especially to put your applications into production, you've got to be really buttoned up and sound with code versioning and things like that.

But, yes, in general, I think that MLOps piece and understanding active learning, continual learning, some of these concepts that really are all about maintaining models in the wild, right? As we know, as soon as your machine learning model gets into the wild, it starts degrading, right? Sometimes pretty rapidly - especially in military scenarios on the battlefield, for example, or on an intelligence analyst's system, they start degrading. The government especially has had a really hard time understanding that machine learning is not a widget.

And we actually, especially in the United States, in higher education, we still teach it like it's a widget. And a lot of the folks I interview and hire, they think of AI and machine learning, especially, as a widget. I'll build a model, I'll deploy it.

That's not the end. You've got to treat it like a living organism and continue to maintain that model - retrain it, tweak it. That is absolutely a skill that for some reason has been lacking. I think the academic institutions have just been slow to really teach it. They've been focused more on theoretical fundamentals, which are important, but don't really help you productionize or productize a machine learning application.
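
To make the "living organism" point a bit more concrete, below is one small, hedged illustration of the kind of check a monitoring job might run: compare the distribution of a deployed model's recent scores against what it produced at training time, and flag retraining when they diverge. The two-sample Kolmogorov-Smirnov test, the threshold, and the synthetic score distributions are all assumptions chosen for the example, not a prescribed monitoring doctrine.

import numpy as np
from scipy.stats import ks_2samp

def output_drift_detected(reference_scores, live_scores, p_threshold=0.01):
    """Return True when live model outputs no longer match the reference distribution.

    Uses a two-sample Kolmogorov-Smirnov test; the p-value threshold is an
    illustrative choice that would be tuned for a real system.
    """
    result = ks_2samp(reference_scores, live_scores)
    return result.pvalue < p_threshold

# Hypothetical monitoring run: scores captured at training time vs. scores
# produced by the deployed model over, say, the last week.
rng = np.random.default_rng(0)
reference_scores = rng.normal(loc=0.0, scale=1.0, size=5000)
live_scores = rng.normal(loc=0.6, scale=1.2, size=5000)  # deliberately shifted

if output_drift_detected(reference_scores, live_scores):
    print("Drift detected - schedule retraining with freshly labeled data.")
else:
    print("Live outputs still look like the training distribution.")

In a real pipeline, a check like this would trigger the retraining and redeployment steps rather than just printing a message, which is exactly the maintain-retrain-tweak cycle Rob is describing.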

Ted Hallum: [00:42:12] So, you mentioned how critical MLOps is - being able to take a model and put it into production. You said a hundred percent: if there's one skill that just makes somebody super attractive to you during the interview process, it would be if they have experience with that, which makes sense - laws of supply and demand. MLOps is relatively new; probably within the last two years I'd say it really came onto the scene, so there are not that many people out there who have had a chance to get experience with it. But, our listenership is filled with people who would love to go out and get the skills that you recommend so they'd be more competitive for the jobs that are available. If somebody wants to fill that gap in their skillset and they want to get spun up on MLOps, what's the best way for them to obtain that skill and be job-ready?

Rob Albritton: [00:42:57] So, one way is just playing with open-source MLOps tools. I say playing, but try projects on your own, right? Or, try projects in your current job... or your current academic setting, using tools like Kubeflow and MLflow and some of these open-source tools. That's one way. Another - I can't emphasize enough how critical internships are, quite honestly, and I know we've talked about Etsy and their MLOps pipeline that they've recently instantiated. There are companies popping up all over the place now that are catching on and realizing that a robust MLOps platform is required to truly create production-grade machine learning applications.

Intern at those companies - that's one way. And, they're available. When I was active duty, there's no way I thought I would have been good enough, quite honestly, to intern at LinkedIn or Google or Facebook or any of these places that undoubtedly are leading the charge in some of these newer concepts for putting ML into production. I say go after it; try to get internships there. That's one way to learn. Otherwise, yeah, it's just: be curious. Because the academics haven't really caught up, in my opinion, to where we need them to be - to give individuals the skills that we need, and other hiring managers need in industry - I think you have to be curious and take initiative, which veterans are some of the best at, right? Take initiative, stay busy, and just dabble, learn. That's probably not the best explanation. I know you probably would love it if I said do X and Y, and Z is going to happen. But, unfortunately, those kinds of mechanisms and that ecosystem really aren't there just yet for machine learning. So, a lot of it falls onto the individual to stay curious and learn about these new tools on their own.
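
If you want to take Rob's "just dabble" advice literally, here is a minimal, hedged sketch of a toy pipeline using the Kubeflow Pipelines SDK, assuming the v2 kfp package is installed. The component names, base images, and accuracy threshold are made up for illustration; the compiled YAML could be uploaded to a Kubeflow Pipelines instance such as the single-node minikube setup linked in the show notes.

from kfp import dsl, compiler

@dsl.component(base_image="python:3.11")
def train_model(epochs: int) -> float:
    """Stand-in training step that just fakes an accuracy number."""
    return 0.5 + min(epochs, 45) / 100.0

@dsl.component(base_image="python:3.11")
def decide_next_step(accuracy: float) -> str:
    """Toy gate: 'deploy' if the model is good enough, otherwise 'retrain'."""
    return "deploy" if accuracy >= 0.9 else "retrain"

@dsl.pipeline(name="toy-mlops-pipeline")
def toy_mlops_pipeline(epochs: int = 40):
    train_task = train_model(epochs=epochs)
    decide_next_step(accuracy=train_task.output)

if __name__ == "__main__":
    # Produces a pipeline spec you could upload through the Kubeflow UI or API.
    compiler.Compiler().compile(toy_mlops_pipeline, "toy_mlops_pipeline.yaml")

Even without a cluster to run it on, writing and compiling a pipeline like this forces you to think in discrete, repeatable steps - training, evaluation, a deploy-or-retrain decision - which is most of the mental shift that MLOps asks for.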

Ted Hallum: [00:44:55] I think that gives our listeners some great options if they do want to get up to speed on MLOps. I did want to ask - you mentioned internships - are there opportunities that you're aware of where our listeners might be able to do an internship at Octo? Is that a thing?

Rob Albritton: [00:45:08] It is a thing. Absolutely. We tend to fill up pretty quickly actually, because we're a company run by a lot of Virginia Tech alums. So, we have a pretty robust pipeline of engineers coming out of Virginia Tech. But, that's not the only place we recruit from. We may not do the best job ever at socializing our internship program, but the answer is, yes, we do have one.

We would love to have more Veterans in Data Science and Machine Learning members that want to join us and spend a few months with us over the summer. Or, hey, maybe you're a veteran who's recently come off active duty, separated or retired, and you've gone into a data science program, and you're going to have a break over the summer. Yeah, absolutely, reach out to me and let's talk about spending time with Octo working on real customer problems. That's one thing I'm very passionate about. There's nothing worse - I had an internship with a certain government agency, and for three months I literally digitized farmland. So, basically, drew lines on imagery, mapping farmland, and it was miserable. It was the worst internship. I learned nothing, except that I didn't want to go work for that agency.

That's not what we want to do at Octo. We are all about giving you real customer missions to work on for that time that you're with us. And, by the way, a lot of our interns come back and work with us full-time.

Ted Hallum: [00:46:30] So, you heard it there: if you're looking for an internship opportunity, or you need an internship opportunity, and you've been interested in what you've heard about Octo so far, the Senior Director for Artificial Intelligence within oLabs at Octo has said, "If you need an internship opportunity, come to me." I can't imagine a better open door than that. All right, Rob, at this point in the conversation, I want to shift and give you an opportunity to wax philosophical. I feel like you're in a position, between what you've done with Nvidia, your experience at MITRE, and now what you're building at Octo, where I know you have a unique perspective. What do you think the implications are for how we ultimately do, or don't, implement AI in the national security arena?

Rob Albritton: [00:47:19] I'll give you one of my pet peeves about what we are doing in the national security arena, especially the U.S., but really it's the Western world. We are obsessed with the ethics and policies of AI. Let me step back and preface this with, I do believe ethics are important. I don't want the viewers to think I believe in implementing unethical AI. That's not the case, but I think we focus too much on it.

And, we're focused too much on whether an algorithm or a model is biased or not instead of building actual capability. I can't tell you how many senior leader calls I've been on with government agencies where they're building communities of practice and they're building these five-ethical-principles guidance documents.

And, they spend years on this stuff - but zero capability. They built zero capability. You have ethics for what? You don't have an application to be ethical about. Whereas you look at our competitors - the Chinese, the Russians, the Iranians, the North Koreans, you name it - they're building capability.

And then, they'll worry about the ethics later, right? I'm not saying we should do the exact same thing because we are better. But, we need to start taking AI seriously and building capability, real capability, start prototyping. Let's get prototypes out onto the battlefield.

Let's get more prototypes onto the analyst desk.

Ted Hallum: [00:48:39] It sounds to me like you're sounding the warning to not get caught up in a close cousin of analysis paralysis.

Rob Albritton: [00:48:46] Absolutely, very close cousin. It's happening. It is absolutely happening. There's a reason that there are so few operational successes that we can point to in the U.S. Government and say, "This is exactly where AI is making things better. This is where it's used every day. Or, this is a specific operation where AI was implemented, and, without it, we couldn't have been successful in that operation." It's because of exactly what you said - paralysis. We're paralyzing ourselves by overthinking it, quite honestly. Thinking about, if we do that, what are the ethical implications? And worrying about explainability too.

Ultimately, AI has to be explainable for mass adoption, and for everyone to feel comfortable with it, but I think it'll come. And, I think we need to worry about that less right now, quite honestly. That's probably not a mainstream opinion to have. But I'll give you an example of why I think that way.

You've been an analyst, Ted. If you think about how analysts make decisions in the intelligence community, oftentimes it's a guy or girl that's been doing it for 30 years. They've been a Russia analyst for 30 years. They write a paper. They make a decision. Quite frankly, some of these decisions make their way into the presidential daily briefing.

They make their way onto the desks of some very important decision makers. Decisions are made based on these, and when asked why they made a certain decision, it's because of gut feel, right? It's because, "I've been doing it for 30 years. I have no other way to explain it, other than I have 30 years of experience doing this."

Why do we trust that? Because it comes from a human. But we don't trust the decisions coming from a machine learning model that we know is just as accurate as, if not more accurate than, humans. So, I can't wrap my brain around why we have to explain. Why can't we just trust, at times, that these machine learning models actually are pretty good at what we've trained them to be good at?

Ted Hallum: [00:50:47] So, I think it probably will come down to people becoming more familiar with the technology. I think for a certain percentage of the population, this technology is probably still foreign.

Rob Albritton: [00:50:57] I think that is absolutely it. Yeah. It's unfamiliar, right? But, I think it'll get better over time. Folks will be less inclined to think that machine learning, AI, is going to take their job.

We need them to understand that it augments you. It makes you better at your job; it doesn't steal your job. And, if you take the appropriate precautions, put your models through the appropriate T&E (test and evaluation) processes and red teaming processes, you can trust them.

So, we've just got to keep pounding that into folks' heads in the community.

Ted Hallum: [00:51:29] You mentioned how some of the things that we focus on obsessively could put us at a disadvantage, potentially, down the road with other countries that aren't as obsessive about those aspects of implementing this technology.

That brings to mind the question of today, in the present, how competitive do you think we are with near peer adversaries in terms of the AI race?

Rob Albritton: [00:51:56] I'll tell you, I will not put myself in the camp with a lot of senior leaders in our community that we hear saying, "The Chinese are 10 years ahead of us, and the Chinese dumped this much money into AI research." I would say, "Prove it to me." Prove it, because I haven't seen the evidence of that. So, today, I do not think they're ahead of us. As somebody that's been in this industry for 15 years now, I don't believe they're ahead of us. Rapidly gaining on us? Yes, absolutely. Stealing from us? Yes, absolutely - very rapidly. To answer your question, I think we are still ahead. I think the United States, and really the Western world writ large, we're still ahead of China, Russia, some of our near peers.

I think the biggest threat to us, though, is continuing to paralyze ourselves with those questions of ethics and taking our foot off the accelerator as far as doing bleeding-edge research and then being willing to actually apply that research and put these capabilities on the battlefield rapidly. But even more, I think it's just the manner in which our near peers are willing to use AI. This really comes down to a cultural issue. As crazy as it sounds when we talk about robot warriors, I'm not completely in the Elon Musk camp. I don't think Skynet is right around the corner. I don't think we're anywhere near strong AI, or anything close to it. I don't believe that. However, I do think we have adversaries that are working towards that and willing to use AI in some very devious and really nasty ways. For example, I would not be surprised to see fully autonomous weapon systems - squads of basically robotic infantry - coming out of the Russian version of DARPA, for example, in the next few years. That wouldn't surprise me.

Unfortunately, or fortunately, however you think about it - I know it just depends on your outlook - I don't think we really have the stomach right now, it's not within our DNA in the Western world, to build those kinds of systems and field them. We don't want to take the risk of accidentally hitting a school, church, or a hospital. That would be devastating. So, how do we defeat those systems?

The answer is counter-AI. To stay within our cultural norms, bounds, and what we truly believe in ethically, as Americans, we may not use those fully autonomous systems to go toe to toe with the fully autonomous infantry squads, for example, coming out of Russia, but we've gotta be able to combat them somehow.

I think the answer is counter-AI and being able to jam those systems, understand how they work, and defeat them some way other than kinetic-on-kinetic operations.

Ted Hallum: [00:54:42] So my next question was actually going to be what scenarios keep you up at night, but I think you just described it. Fully autonomous infantry squads from Russia would definitely keep me awake at night, but it's comforting to all of us to know that thought leaders like yourself are thinking about counter-AI that will adequately address situations like that, should they potentially arise. So, to wrap up on a lighter note, I was just going to get your top recommendations for data-related podcasts and/or books.

Rob Albritton: [00:55:13] My recommendations are probably the same as some of your other guests' have been, but I love Lex Fridman - I listen to him on the treadmill all the time.

Ted Hallum: [00:55:21] Surprisingly, no one has mentioned him yet. So, that makes me very happy. I've been waiting on a guest to mention him. He's one of my favorites.

Rob Albritton: [00:55:28] Oh, wow. Yeah, I'm a huge fan. Some of the guests are truly visionaries, right? I don't know about you, but me, as a nerd, I look up to guys like Elon Musk.

I think it's almost hard to even describe - they truly are changing humanity with their innovation and just having the guts to try some of these things, right? Reusable rockets. It's just so impressive to me. But Lex, he's well known for getting these kinds of folks on his podcast, and he challenges them. He asks really tough questions. So, anyways, that's why I love his podcast.

Then, the other one I would say is probably - as a former Nvidian, I've got to plug the Nvidia AI Podcast. I love it. I think they get pretty good guests as well. Granted, they do theirs for a little bit different reason. They're probably marketing, trying to generate revenue, but that's okay.

I think they talk about some really neat stuff. If you haven't watched the GPU Technology Conference (GTC) intro videos and stuff like that - Nvidia is really good at eliciting emotion. I know I get fired up after I watch those; I'm like, "Wow! AI can do anything. We're going to change the world. We're going to continue to make the world better through AI."

Ted Hallum: [00:56:36] Rob, those are two killer recommendations. I really appreciate it. I know the people that tune into this show love to tune into other podcasts as well. So, you just gave them two additional fantastic options.

I want to make sure people know how to contact you because, after this episode, there's a whole laundry list of reasons why people might want to reach out to you - from being interested in a position to possibly doing an internship. We've got here on the screen Rob's username for LinkedIn.

If you're listening on Apple Podcasts, it's robalbritton. In addition, if you have access to our Slack workspace, we have a special channel where you'll be able to speak with any of our community members who have come on the podcast, and as a Veterans in Data Science and Machine Learning community member, Rob will be there.

So, if something he said has inspired you and you want to tell him that, or if it's created another question in your mind that we haven't addressed, hit him up in our Slack channel. Thank you so much for coming on the show, Rob. I really appreciate it!

Rob Albritton: [00:57:31] Thanks Ted. Thank you for having me!

Anytime!

Ted Hallum: [00:57:36] Thank you for joining in on this conversation with Rob Albritton. As always, until the next episode, I bid you clean data, low p-values and Godspeed on your data journey.