
Doing AI Before It Was Cool

Image: Dr. Amy Henninger

S&T Senior Advisor for Advanced Computing Dr. Amy Henninger and host Brittany Greco get us up to speed on the Directorate’s work to address adversarial artificial intelligence (AI), a new (and potent) threat that is part of the emerging AI revolution in computing. She discusses how it can appear similar to cyberattacks yet is fundamentally different. She also offers her definition of AI—one that you may not have heard before. If you’ve wondered about deep fakes, phishing scams on steroids, and the broader social implications of not knowing what you can trust, give this episode a listen.

View Original "Doing AI Before It Was Cool" audio
 
Run time: 34:52
Release date: October 8, 2024

Show Notes

Guest: Amy Henninger, Ph.D., Senior Advisor for Advanced Computing

Host: Brittany Greco, Senior Communications Specialist, Science and Technology Directorate, Department of Homeland Security

[00:00:00] Amy Henninger: AI advances all the time and it just becomes part of our societal baseline. That's why I call it advanced software, because right now when we use ChatGPT we think of it as AI, but in five years, it's just going to be ChatGPT.

Dave: This is Technologically Speaking, the official podcast for the Department of Homeland Security, Science & Technology Directorate, or S&T, as we call it. Join us as we meet the science and technology experts on the front lines, keeping America safe.

[00:00:16] Brittany: Hello everyone and welcome to Technologically Speaking. I'm one of your hosts, Brittany Greco, and I'm so thankful that we are joined today by Dr. Amy Henninger, Senior Advisor for Advanced Computing at DHS S&T. Amy, welcome to the podcast.

[00:00:44] Amy Henninger: Hey Brittany, it's great to be with you. Thank you so much.

[00:00:48] Brittany: Just to get things started, can you help us understand the difference between adversarial AI and other forms of AI?

[00:00:55] Amy Henninger: Sure, absolutely Brittany, thank you for that question. You can think of adversarial AI as the nefarious use of AI. This is the use of AI by different threats, whether they be nation states or transnational criminals, or even hackers. Any threat who wants to use AI in a way that is counterproductive to our Homeland Security missions, that would be considered an example of adversarial AI.

[00:01:28] Brittany: And then how is that different from a cyber-attack, or is it just like a different kind of cyber-attack?

[00:01:34] Amy Henninger: Yeah, so they are a little different. A cyber-attack can be achieved against software that's not AI, and a cyber-attack can also be achieved against AI-based software. Adversarial AI doesn't have to be achieved against AI. We can think of an adversarial AI attack like a morphing GAN (short for generative adversarial network). What is a morphing GAN? So a morphing GAN is when you take two genuine pictures - so let's say we have a picture of Brittany and a picture of Amy, okay? And we want to morph those two pictures together in such a way that we have a new fake image, right? And that fake image now looks a little bit like Brittany and a little bit like Amy. And let's say you put that on your driver's license - let's say you put that on your passport, whatever, but somehow it's in a database. All of a sudden that single ID can be used both for you and for me.

So, if you think of that as a kind of attack, I can present this passport and it doesn't look exactly like me, but it looks enough like me that the officer says, okay, that's fine. But he thinks I'm Brittany, for example, because it looks like you too. In the database it says it's you, but the officer says, “Yeah, it looks enough like her.” So, you can take that same thing and translate that to an automated facial recognition system.

This kind of attack is AI against people, like the TSA officer or the Border Patrol agent, but it could also be AI against a system. Let's say we have this automated facial recognition system and it's comparing the passport photo with my picture at the TSA gate and it says, “Yes, it's Amy.” But, oh by the way, it could match Brittany too. The difference is that adversarial AI can be used against people, and it can be used against non-AI systems - software that doesn't have AI in it - while cyber-attacks are only used against software. Cyber-attacks can have ramifications on people, but they're only achieved through software; adversarial AI attacks can be achieved against people, against non-AI software, or against AI systems.
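To make the morphing idea concrete: a real morph attack is generated by a GAN, but the core blend can be sketched at the pixel level. Here is a minimal Python illustration - a plain alpha blend, not an actual GAN - assuming OpenCV is installed and two pre-aligned face photos exist at the hypothetical paths shown:

```python
# Toy illustration of face morphing. A real morphing GAN produces a far
# more convincing composite; this alpha blend only shows the core idea.
# Assumes OpenCV (pip install opencv-python) and two pre-aligned face
# photos at these hypothetical paths.
import cv2

face_a = cv2.imread("face_brittany.jpg")  # hypothetical input photo
face_b = cv2.imread("face_amy.jpg")       # hypothetical input photo

# Resize so the two images can be blended pixel-for-pixel.
face_b = cv2.resize(face_b, (face_a.shape[1], face_a.shape[0]))

# Weighted average of the two faces. A morph attack tunes this blend so
# the composite matches *both* identities to a face-recognition system.
alpha = 0.5
morph = cv2.addWeighted(face_a, alpha, face_b, 1.0 - alpha, 0)

cv2.imwrite("morph.jpg", morph)
```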

[00:04:01] Brittany: I think that distinction is important because it does seem like adversarial AI potentially has huge repercussions, real world, digital world…to your point, it's not just affecting a digital system, but it can be used to fool the people we usually rely on as experts in their field to check these things. It's something that can definitely undermine those human-based security systems that we have.

[00:04:25] Amy Henninger: That's true, and that's why it's so important for DHS, right? Because right now we have a lot of human-based security systems. It's an immediate threat to us, both for our AI systems that are fielded and for systems that are not AI-based but can be attacked by AI.

[00:04:45] Brittany: Why do you think it's important to define these terms, and to have shared understanding of what we mean specifically when we reference adversarial machine learning versus adversarial AI versus cyber-attacks? 

[00:04:57] Amy Henninger: A lot of times word definitions get us into trouble because we don't give them enough precision. You want to be sure, as you're starting this kind of work, that you're being comprehensive in your thinking. It's very easy, for example, to say, “We've done something like this before, it was called cyber security, so I'm just going to assume everything I did in cyber security, start from there, and tailor it to adversarial AI.” And the problem with that is, if you don't go back to first principles and think hard about what adversarial AI means to your organization, you're going to pull in assumptions that might not be true. You’re just going to start with, this is where we were 20 years ago. 

So that's one of the reasons. The other reason is that we have a very robust cybersecurity R&D program at S&T. It's led by my good chum, DC, Donald Coulter. You've interviewed him, I think, maybe in the past, or somebody has. He's a great guy, and he has a very nice cybersecurity portfolio. We wanted to make sure that we understood the distinction between cybersecurity and adversarial AI so that we stay coordinated, so that we don't waste resources, so we could understand interfaces and synergies that we should have, ways of leveraging each other's work, yet maintain full, independent portfolios. I think lastly it's important when you're communicating with other people, so they understand what you're talking about, right? Because there's this thing called AI security, and some people liken that to counter-AI, some people liken it to adversarial machine learning, and some people liken it to adversarial AI, but we all have to understand what those terms mean to us. So when we talk with each other we understand that we're comparing apples to apples.

[00:06:55] Brittany: That makes sense.

[00:06:56] Amy Henninger: And not apples to fruitcake that sometimes has apples in it.

[00:07:00] Brittany: Oh man, see, now I gotta wonder about fruitcake recipes. I'm embarrassed to say we didn't define what AI means at the beginning of this episode.

[00:07:02] Amy Henninger: No, that's a really great question. That's really a huge issue for the AI community. There is not a common definition that is universally accepted. The definition that I like always gets me in trouble with mathematicians and statisticians. I call AI advanced software.

[00:07:27] Brittany: Okay.

[00:07:28] Amy Henninger: And here's why. We all know that AI is software, right? It's embodied in software, and smart people will say, “Yeah, but there's so much math in it, in the machine learning, right?” And they're exactly right, no qualms about that whatsoever. But AI is bigger than just machine learning. Machine learning and deep learning, that's just the thing we're doing now, but AI is bigger than that. AI's been around since before you and I were born. Since before I was born, for sure. And things that we take for granted today once started out as AI. So, at one point there was something called Deep Blue, right?

Brittany: Yes.

[00:08:07] Amy Henninger: You've heard of this, right? Very good. IBM's chess program, an AI-based chess program on a high-performance computer - a supercomputer at the time - that actually beat Garry Kasparov, the chess grandmaster and world champion. And there are some idiosyncrasies in that match, but yeah, it beat him. It turns out that was over 25 years ago, and it turns out that you could have a Deep Blue-like program on your iPhone now as an app. We don't think of that as AI. We just think of that as a chess game.

There was a time when optical character recognition, OCR - the way computers read numbers or letters and transcribe them into text - was pretty intense AI and image processing, but then we solved it, right? So we solved it, and it just becomes something we use now. AI advances all the time, and it just becomes part of our societal baseline. That's why I call it advanced software, because right now when we use ChatGPT we think of it as AI, but in five years, it's just going to be ChatGPT. AI is the threshold, and it's constantly advancing as we solve problems. So that's why I just say it's advanced software, and people who are not software people hate that because they want it to be special - that you have to be special to be able to do AI - but to me, it's just advanced software.

[00:09:51] Brittany: So you mentioned morphing as one of the types of adversarial AI threats. Are there any others that folks might be familiar with or that might be really emerging that they should know about?

[00:10:00] Amy Henninger: Sure, and that's a great one. One of them that people are really starting to hear a lot about casually would be deep fakes.

[00:10:08] Brittany: Oh yeah.

[00:10:09] Amy Henninger: Yeah, exactly, and some people would call morphing a type of a deep fake. So again, this is sort of a language and taxonomy issue. Deep stands for deep learning and fake obviously stands for fake, or fake media, and they put those two words together. 

[00:10:23] Typically, what you think of when you hear of a deepfake is you're going to put Brittany's face on a picture of Amy, right? So all of a sudden, you have Amy with Brittany's face, so that might be one. Or you could have the morphing one I talked about before. You could do a deep fake of a voice.

[00:10:43] This is a really interesting example. A company in the UK had a financial analyst, I think in Hong Kong, working for that same company. First of all there was an email, and the chief financial officer, or the chief operating officer, said, “I need you to cut a check for like 25 million.” This is to the guy in Hong Kong, and the guy in Hong Kong says, “That doesn't seem quite right, because that number is really out of the ordinary for what I would typically hear.” He called him on it; he thought this was a phishing email or something that was not legitimate. So the attacker upped the game, because it was an attack. The financial analyst in Hong Kong made the right call, but the attacker upped the game and said, “Let's have a Zoom call and talk it over.”

The financial analyst says, “Okay, in case it's legitimate, I'll hear it and see it for myself.” So the attacker arranged for the Zoom call, and on the Zoom call was the chief financial officer, and a couple of the other C-suite officers were on this call also with the financial analyst. They were explaining to the financial analyst the importance of cutting this check, why he needed to do it, and that they needed it right away. The financial analyst saw three or four people from his leadership team telling him to do something, so he did it. And it turned out that Zoom call was completely fake. Every single one of them - not just the guy that did the initial email, even the supporting characters - they were all deepfakes, basically telling this financial analyst to cut the check.

[00:12:30] And he did, and then the company's out 25 million. Obviously, that's a threat right now, and there are ways you can detect this, right? A lot of times you want to look for the eyes - if they're not blinking at all, or if they're blinking too much. You want to look at how their mouth is moving, and their facial expressions. A lot of times, if you want to verify it's not a deep fake, you can ask the person to turn their head 45 degrees. Sometimes that kind of screws up the algorithm and it'll get blurry or choppy or something. There are ways of detecting those, but the deepfakes are getting better and better all the time. And, of course, there are automated mechanisms to detect them too, but that's turning into a cat and mouse game, just like cybersecurity did, right?
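The blink cue Amy describes is often coded up as an "eye aspect ratio": open eyes give a high ratio, closed eyes a low one, and an unnaturally low (or high) blink count over a clip is one red flag. Here is a minimal sketch, assuming six (x, y) landmarks per eye have already been extracted by a landmark detector (the ordering follows the common dlib-style layout):

```python
# Heuristic deepfake check from blink rate, using the eye aspect ratio
# (EAR). Assumes six (x, y) landmarks per eye, in the dlib-style order
# p1..p6 around the eye contour - the landmark detector itself is out
# of scope here.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: array of shape (6, 2). High EAR = open eye, low = closed."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_thresh=0.2):
    """Count open-to-closed transitions across a sequence of frames."""
    blinks, was_closed = 0, False
    for ear in ear_per_frame:
        closed = ear < closed_thresh
        if closed and not was_closed:
            blinks += 1
        was_closed = closed
    return blinks

# A real adult blinks roughly 15-20 times a minute; a face that blinks
# far less (or far more) over a minute of video is one warning sign.
```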

Another attack that is pretty important would be an adversarial perturbation attack. Let's say you have a surveillance system where you're identifying people on the street, for example. There are ways of dressing, or putting on makeup, or putting on glasses, such that the AI algorithm will break - it doesn't classify like it would have otherwise. Another pretty famous example would be stop signs. There's been a lot of research about autonomous vehicles. Autonomous vehicles will come up to a stop sign, they recognize that the sign is a stop sign, and they know what to do. There are ways of putting tape or patches or sticky notes, what have you, on a stop sign in a way that might look random to a person, but that would trick the autonomous vehicle into thinking that's not really a stop sign - it's a sign that says speed up to 45 miles per hour. That's the key about these evasion attacks: they're tricky in the sense that a person wouldn't naturally notice them, but they're enough to trick the algorithm into coming to a wrong conclusion.
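Perturbation attacks like these are usually demonstrated in research with the fast gradient sign method (FGSM): nudge every input pixel slightly in the direction that most increases the classifier's error. Here is a minimal PyTorch sketch, assuming a trained classifier `model`, an input image tensor `x`, and its true label `y` - all hypothetical stand-ins:

```python
# Fast gradient sign method (FGSM), the textbook adversarial
# perturbation (evasion) attack. Assumes a trained classifier `model`,
# an input image tensor `x` of shape (1, C, H, W) with pixels in [0, 1],
# and its true label `y`.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.01):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss;
    # epsilon keeps the change too small for a person to notice.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()  # stay in valid pixel range
```

The stop-sign patch is the physical-world cousin of this idea: instead of tweaking every pixel, the attacker optimizes a printable sticker that pushes the classifier toward the wrong label.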

[00:14:35] Brittany: I think what's so scary about these types of attacks is that you can't trust your senses. You can't trust what you see, and you can't trust this technology that we've all, at this point, come to rely on for a lot of our everyday lives.

[00:14:47] Amy Henninger: Yes.

[00:14:48] Brittany: It's just scary.

[00:14:49] Amy Henninger: No, you're right. And as you read the literature, especially in the deepfake space, that is a common concern across a lot of the researchers. They'll say that as long as content has been around, there has been fake content, right? And as long as digital content has been around, there has been fake digital content. In computing history, image processing algorithms were just available to niche markets, and then Photoshop came out, and people still do stuff with that, but now it's really elegant algorithms processing and developing these deepfakes, and those are available in TikTok, right? Those are available on GitHub, and anybody can do it. It's becoming very democratized. People expect that to flood the market with deepfakes, and they say that society has always adjusted to fake content, right?

[00:15:51] Brittany: I would say yes and no.

[00:15:52] Amy Henninger: It's fair, I think that's fair. I think some people are concerned, not just with the fact that we're going to be faked, but that we're going to become distrusting of everything, right? So to some researchers, that's the bigger concern.

[00:16:09] Brittany: Yeah, that's a fair point. It's taking us to the natural conclusion of, if you can't trust what you see, then, gosh, how do you get that trust back?

[00:16:15] Amy Henninger: So, the old saying, a picture is worth a thousand words, that's like flying out the window.

[00:16:20] Brittany: So what is S&T's role in this space? Understanding that AI systems are sometimes owned by private companies. There's a lot of this stuff that's going to be readily available out on the web. What is our role at S&T as we think about adversarial AI?

[00:16:34] Amy Henninger: Yeah, that's another great question. You know, this space is where cybersecurity was 20 years ago. So the government is just organizing to understand which agencies and organizations should be doing what, and how is this all going to work? We're in that place right now. And in S&T, I'm going to say we're doing two big things, really large muscle movements. We have a direct mission supporting homeland security processes and missions, but then we have this broader one for the entire homeland. So S&T right now is mostly focused on assessing systems and processes in homeland security missions to make sure they're not susceptible to these adversarial AI attacks. We're also thinking through broader implications, societal implications, right? In terms of misinformation, disinformation, malinformation - what kinds of things can we do to prevent those attacks? And we're trying to think of different types of scenarios for the broader population.

[00:17:39] Brittany: I like how you characterized S&T's work as these sort of larger muscle movements with supporting activities underneath. We've talked a little bit about how adversarial AI can impact both human senses and digital systems. So in our research, in our work here at S&T, is there a component of social science, of understanding, like, how do people consume this information?

[00:18:02] Amy Henninger: Yes and no. There is - it's probably not as tightly coupled as we want it to be - but we do have work going on in S&T in understanding online harms, for example, and how they propagate, what the risk is, and how to mitigate it. And that would be very tightly coupled to deepfakes in the social media space, for example. We haven't connected all those dots, but it's definitely on the roadmap to happen.

[00:18:30] Brittany: Okay. I want to ask about how S&T - because you mentioned before, there are a lot of different government organizations working on this, and making sure we're all using the same language is a good first step - but how is S&T coordinating with other agencies? I'm sure there's a defense component to this and an intelligence component to this, to the extent that you can speak to it. How are we coordinating with other organizations on this research and development?

[00:18:53] Amy Henninger: Yeah, so right now, I would say that space is loosely coordinated, loosely coupled, not formally coordinated on the R&D side. Now, there are bigger processes that have to happen, just like they do in cybersecurity, right? There are going to be formal threat models that need to be developed. There are going to be incident response teams someday that have to be developed. And there's going to be an information sharing component between industry and government, right? All that stuff exists in cybersecurity, and we're going to have that kind of stuff in the AI space, too, for adversarial AI. But right now, in terms of R&D at least, we do a lot of coordination with DoD research and engineering, and we do a lot of interfacing with DARPA. We're looking at maybe assessing some of the tools that they've developed to see how they can support homeland security missions. Two months ago, we had a joint workshop with the Home Office in the United Kingdom that was hosted at the Alan Turing Institute. So we stay coordinated that way.

Brittany: So that international coordination - it's interesting that you bring that up, because in talking with our Deputy Under Secretary Julie Brewer earlier this season, she mentioned the importance of sharing information and that S&T's role is really about building up that scientific understanding of these new and emerging concepts. So I think it's great to hear that we're continuing to do that, especially in an emerging space like adversarial AI, where it's about making sure that everyone has the information they need to increase their security, and we're really trying to look for information and resources from everywhere we can.

[00:20:35] Amy Henninger: You're exactly right. Just thinking about cybersecurity as a model: if there's a zero day somewhere, everybody needs to know about it, and quickly, to patch our systems before we all get attacked. And the same will be true here in the AI space. It's great to share information on our R&D to be efficient with our resources and develop those kinds of partnerships, but we also need to share information on the threats and on the vulnerabilities, and that will make us all more secure.

[00:21:07] Brittany: And when we say zero day, can you let folks know what that is in case they're not familiar?

[00:21:12] Amy Henninger: Yeah, sure. Typically in cybersecurity, when you hear of people doing scans - for example, with Norton antivirus or whatever, and that happens at all different levels, on a host-based machine or a network - those scans are typically detecting signatures of known exploits and vulnerabilities. So when you have exploits or vulnerabilities that are unknown, that's typically what you call a zero day. And that means you have zero days to patch your systems before there's an effective attack against them.
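A toy scanner makes the zero-day point concrete: signature-based detection is a lookup against known byte patterns, so an exploit with no published signature sails through. A minimal sketch, with one real antivirus test string and one made-up signature:

```python
# Toy signature-based scanner, to show why zero-days evade detection:
# scanning is a lookup against *known* byte patterns, and a zero-day by
# definition has no entry in the table yet.
KNOWN_SIGNATURES = {
    "Eicar-Test": b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE",  # real AV test string
    "FakeWorm": bytes.fromhex("deadbeef4141"),            # made-up signature
}

def scan(path: str) -> list[str]:
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig in data]

hits = scan("suspicious.bin")  # hypothetical file path
print(hits if hits else "clean - but a zero-day would also come back clean")
```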

[00:21:49] Brittany: I think you've touched on this in talking about the threats and how this shows up in the world. But, just to reiterate for folks, why do you think the average person should care about adversarial AI and what should they be looking out for in this field?

[00:22:01] Amy Henninger: So, of the things we've talked about so far, the one that I would be most concerned about as an average citizen - let's say I'm not looking at this as Amy Henninger with her S&T hat on, but just Amy Henninger, private citizen - that would definitely be the threat of deepfakes.

[00:22:20] Brittany: Yeah.

[00:22:21] Amy Henninger: Right? That would be the one that I would watch for the most, and you can think of it as an extension of phishing emails, right? We've learned how to deal with phishing emails as a society - most of us, maybe not all the time - and they're getting better, right? Phishing emails used to be different, and they are getting more personalized and easier to fall for. But deepfakes are phishing emails on steroids. Now it's faces, and now it's bodies, and it's voices, and it's the way you talk and the inflections that you use. So that's what I would be most concerned about.

[00:23:00] Brittany: In your work here at S&T, what has surprised you the most about what you've seen over the years, or just about how this particular subject has evolved?

[00:23:10] Amy Henninger: That's a great question. I've only been at DHS for two years. This is actually my first ever regular government job.

[00:23:17] Brittany: Ooh.

[00:23:18] Amy Henninger: Yeah. My job has always been in industry or the FFRDC space, and all the jobs I've had for the government - I have been a government employee before, but they've always been term positions: two-year job, four-year job, three-year job, whatever. So this is actually my first ever regular government job. What surprised me the most was, as I dug deeper into it - and of course, the Under Secretary gets credit for this, because he had the vision, the intuition, the knowledge of how it impacts the homeland; he had a much clearer picture of that than I did - but as I started to dig into this more and more, I came to understand that it really is not just an academic threat, but that it's a real threat, and it's a real threat to the homeland now. In my past jobs - my last job specifically, in the DoD - I was the Senior Advisor for Software and Security for the Director of Operational Test and Evaluation. And in that job I was the last person in the Pentagon to take a technical look at the assessments - cybersecurity assessments and software testing assessments - of all the programs of record on the director's oversight list. So in that job I had people from FFRDCs and labs coming to me saying, “Oh look, you can put stickies on a stop sign and really mess up an autonomous vehicle. We've got to do something about this now.” Or, “You can put this shirt on and completely evade facial recognition systems. We have to do something about this now.” And those were very interesting to me in that role, but they weren't important to me yet in that role, because we didn't have the fielded systems that were going to be attacked by those kinds of attacks, right?

[00:25:18] My mission was to get equipment into the field. Even though I knew that was important and interesting, and someday it would be something we had to do, it wasn't the crocodile closest to my canoe, right? I had cyber-attacks I had to worry about. It was much easier to do a cyber-attack and take something out than it was to do an AI-based attack and take something out. So I came to DHS with that mindset. I came from that space to a space where I'm supposed to be looking more into the future, which is S&T, right?

[00:25:51] I'm like, okay, that makes a lot of sense. I should be looking at adversarial AI now. I used to be a customer for that, for the people who are working in that space, but now I'm the person working in that space. But then as I started to understand and think through the attacks, like we talked about, and understand the implications for homeland security missions, it became obvious to me that there are some missions that are at risk today because of vulnerabilities due to adversarial AI.

[00:26:19] Brittany: And T&E is testing.

[00:26:21] Amy Henninger: Testing and evaluation.

[00:26:23] Brittany: There we go. I know we love acronyms, but it's always fun to just let people in on the secrets we're really talking about.

[00:26:28] Amy Henninger: I'm glad you're doing that. Thank you.

[00:26:30] Brittany: It sounds like you've had a really rewarding and very interesting career. I wonder if you can walk us through a little bit more of what brought you to S&T. 

[00:26:38] Amy Henninger: Yeah, I would love to. Let me start going backwards, with the reason I took the job at S&T. Early in my career I'd been the person in charge of AI. I worked at an AI company right out of graduate school for about five years. And this is before AI was cool, right?

[00:26:56] Brittany: You were on the ground floor.

[00:26:58] Amy Henninger: I was doing AI before it was cool. So that was a great job, but I got to be an AI person, right? And I got to be a software person. I got to be a cybersecurity person. I got to be a modeling and simulation person, a live, virtual, constructive person, a data analytics person. And where I come from you can call yourself an expert if you've got 10,000 hours in the field, right? And I am unfortunately old enough that I can say I have 10,000 hours in all those fields. 

Brittany:  I don't think that's unfortunate at all. I think that's a huge benefit.

[00:27:28] Amy Henninger: You're very sweet, thank you. So I got to do all those in my career, and they're not orthogonal to each other - there's a lot of overlap, right? But I got to do all those things. And there was a job in S&T where you got to be the branch chief over a portfolio that includes all those things. I was the branch chief for the advanced computing branch - that's the job I hired in for. And in the advanced computing branch, we had a tech center in cybersecurity, and a tech center in artificial intelligence, and a tech center in data analytics, and one in quantum information sciences, and one in modeling and simulation. So that was the first time in my career I got a chance to do it all, right? I got to do all of those things at once, which was great fun. I got to bring my experience from earlier in my career, having done all those things before, and think about where there are intersections between those things, where there are seams between those things. So that's why I took the job in S&T. I can work in any of those spaces, which is awesome. But the one I'm working in right now is at the intersection of AI and cybersecurity - adversarial AI - which is a very comfortable place for me to be. I really like it a lot, and it still lets me work across a couple of different fields at the same time.

[00:28:49] Brittany: I think that's fantastic to hear - that you get to this position where you really get to apply all of the knowledge that you've built over your career. I think that's particularly rewarding and rare, so congrats on that. Is there anything that you had in mind that you wanted to be when you grew up? Like, when you were envisioning your life, little Amy, were you like, Oh man, I really can't wait to work in AI and information systems? What did you want to be when you grew up?

[00:29:14] Amy Henninger: When I was a little kid, I didn't really have one thing, but I went to college thinking I was going to be a high school math teacher and a counselor, a high school counselor. That was kind of where I started. I love to learn. I think that maybe is my greatest strength and greatest weakness at the same time. So, I love to learn, and I would learn something and I'm like, Oh, I want to learn more about this. So I would go get a degree in it. And then I would learn more. And the more I learned, the more I realized there's so much I don't know.

[00:29:49] Brittany: It's very humbling and exciting when you find that little thread of information that you're like, Oh, I have to know more about this. Where does this go? What's possible here?

[00:29:57] Amy Henninger: It's like I have this big map and I'm trying to connect dots and fill in gaps. That's one of the things I like about my job now is that I get to learn so much, doing research. You have to do a lot of scrubbing of the literature and figuring things out, and I really enjoy that part of the job.

[00:30:16] Brittany: Amy, just getting to know you a little bit better, where are you from and what's something that folks might be surprised to know about you?

[00:30:21] Amy Henninger: So originally I am from Downers Grove, Illinois, a suburb of Chicago, a western suburb of Chicago, not too far from Argonne Labs. Yeah, so that's where I grew up, spent my life up until high school years, and then I moved from there further south in the state, to a place called Glen Carbon, which was another suburb, but on the Illinois side of St. Louis, and that's where I finished my high school years.

[00:30:49] What's something people would be surprised to know? Gosh, that's a tough one. What would people be surprised to know about me? I trained with the NSA cyber red team for 200 hours over a month and a half - most people don't know about that. That was a ton of fun. And I have six degrees. Most people don't know that.

[00:31:12] Brittany: That's impressive. 

[00:31:13] Amy Henninger: Yeah, I said I like to learn, right? So yeah, that's a little over the top, maybe, but…

[00:31:20] Brittany: Are they all undergraduate, or is there a mix?

[00:31:23] Amy Henninger: A mix.

[00:31:24] Brittany: Ooh, what's your favorite one?

[00:31:26] Amy Henninger: My favorite one. I think my favorite one is my computer engineering degree. That was the one I enjoyed the most. Yeah, they're all good, they were all fun. But that's my favorite.

[00:31:36] Brittany: I know. When I retire, I do want to just go back to school and be like, I think I'll learn a little bit more about this over here.

[00:31:43] Amy Henninger: Are you that way too? So what did you study in school?

[00:31:46] Brittany: I studied history in undergrad and international relations in grad school.

[00:31:50] Amy Henninger: Fantastic. Do you have a special thing in history that's like your favorite?

[00:31:55] Brittany: I focused a lot on British history, and so I translated that into US/UK relations, on the international relations side, so I think it was just because I could read all the primary documents since they were in English.

[00:32:06] Amy Henninger: Nice

[00:32:07] Brittany: Oh yeah, can you tell us what the six degrees are? 

Amy Henninger: Yeah, sure. I told you earlier I wanted to be a high school teacher, a math teacher, so I have a math degree, and I have a psychology degree. That was my plan to be a counselor. And then I learned about something called human factors engineering, which sounded very interesting to me.

[00:32:23] I learned about that in a psychology class, and they said, if you want to be a great human factors engineer, you should have degrees in psychology, math, and industrial engineering. I'm like, huh, okay. So I got a degree in industrial engineering, and then I got a real job where I was doing a lot of work in simulation, but for process optimization kinds of simulation. It turned out I was developing, basically, a simulation model of a reconfigurable F-15/F-16 cockpit simulation, which was a training simulation - a model of how to build a simulator, basically. And I thought, oh, wow, simulators - this is a whole new thing in simulation I didn't know about. So I wanted to learn more about that, and that's how I got into computer engineering. And then I switched jobs and got my PhD, and then I moved into AI, right? Because then, all of a sudden, you can combine simulations with AI, which is what I wanted to do next. So that's how I got into computer engineering and AI. My two master's degrees are in engineering management and computer engineering, and my PhD is in computer engineering.

[00:33:34] Brittany: Wow, I think you might have the most degrees of anyone we've had on the podcast so far.

[00:33:38] Amy Henninger: When I was working at the AI company I told you about, I opened an office for them, a profit and loss center. The first guy I hired was an intern, and he was working on his fifth degree when I hired him - his PhD, actually. He's such a good guy, I just adored him. He was such a good worker. I used to tease him, I'd say, you're bringing down the office average, buddy. And we still have a good laugh about that. He's a good kid - well, not a kid anymore - but yeah, we laughed about that a lot.

[00:34:09] Brittany: You gotta get those numbers up. Those are rookie numbers.

[00:34:12] Amy Henninger: You need one more, come on.

[00:34:14] Brittany: Exactly. Get with the program.

[00:34:15] Brittany: Well Amy, thank you so much for being with us today. This has been an absolutely wonderful conversation.

[00:34:21] Amy Henninger: Yeah, likewise, thank you, Brittany. You made it easy.

[00:34:24] Brittany: And thank you all for listening to Technologically Speaking. Tune in next time and we will see you on the next episode. 

[00:34:29] Dave: Thank you for listening to Technologically Speaking. To learn more about what you've heard in this episode, check out the show notes on our website, and follow us on Apple Podcasts and YouTube, and on social media at DHS SciTech. D H S S C I T E C H. Bye! 

Last Updated: 10/09/2024