
The Three-legged Stool

Technologically Speaking. The Official S&T Podcast

Host John Verrico sits down with Arun Vemury, program manager in the Science and Technology Directorate's (S&T) Biometric and Identity Technology Center, to discuss facial recognition performance and fairness. They dive into how the technology works, how it has changed over time, current challenges and biases, and what S&T is doing to overcome them. Arun and John also touch on whether facial recognition technology can distinguish between identical twins, and the need for cameras designed to better accommodate the diversity of human faces and skin tones.

Run time: 34:27
Release Date: August 10, 2022

Show Notes

Guest: Arun Vemury, Program Manager, Biometric and Identity Technology Center

Host: John Verrico, Chief of Media & Community Relations

[00:00:00] Verrico: Hi, I'm John Verrico and I work for the Department of Homeland Security Science and Technology Directorate, or S&T as we call it. Join me and meet the science and technology experts on the front lines, keeping America safe. This is Technologically Speaking.

Welcome to this episode of Technologically Speaking. I'm one of your hosts, John Verrico. Today I'm joined by Arun Vemury, a program manager in the Biometric and Identity Technology Center here at the Department of Homeland Security Science and Technology Directorate. During today's discussion, we're going to be talking about the topic of facial recognition performance and fairness. Welcome, Arun, so glad you could meet with us today.

[00:00:44] Vemury: Thanks, John. Thanks for having me.

[00:00:46] Verrico: Arun, you're one of my favorite people here.

[00:00:48] You know, I do always enjoy talking to you because I've learned so much. How would you describe biometrics, and specifically facial recognition technologies? If you were trying to describe those to maybe some up-and-coming high school students who might be interested, or even if you were trying to describe it to your 80-year-old Aunt Tessie at home?

[00:01:09] Vemury: Hmm. Okay. Well, I think we'd go at it this way. It's really about trying to identify or verify a person. So, if you think about it historically, there have been a lot of different ways that people assert their identity, right? You have different things like, let's say, a driver's license, right? You have a card that says, this is who I am. So, we think about that in terms of what you have. And another way you could go is something you know. You think about the speakeasies during Prohibition, and the way you'd get into a secret room was to know the password.

[00:01:42] Vemury: Right. Maybe a more contemporary example would be knowing your PIN or your password for your account. And then the other one that's often cited is something that you are, which is where biometrics come into play. So, it's part of this kind of three-legged stool, if you will. Biometrics are the physiological or behavioral characteristics of a person that are relatively unique. So, your face, to a degree, your fingerprints, your iris images, any number of different characteristics about an individual, how you sign your name. All of these things could fall within the category of biometrics, for the most part. And it's something that's unique to you because of your physiology and maybe how you express that physiology, like with a signature.

[00:02:22] Verrico: That is brilliant. And as long as I've worked with you, Arun, you never actually broke it down like that before. And I'm thriving on this, I'm loving it. It's what you have, what you know, or what you are. That really is kind of an eye-opener and very easy for us to understand now. Moving on now, understanding there are different types of biometrics but focusing principally on facial recognition, what do you find to be the most interesting aspect of that?

[00:02:51] Vemury: For me personally, I think what is really very interesting is how it's changed over time. When I first started my career working on this technology a number of years ago, the universe was really fingerprints, and iris was up and coming. And face was always kind of interesting but never worked well enough. It was always really error prone: you'd get false matches, false non-matches. It wouldn't work quite right. People age; there were problems. In the last couple of years, we've seen this massive improvement in how accurate the technology is. And it's really because people have learned how to apply things like machine learning to the problem set.

[00:03:28] Vemury: Once upon a time, there were these photos that we would take and we'd say, oh, this isn't a very good photo; this isn't going to work for facial recognition. Those lousy photos are actually really good photos now that the matching systems and the algorithms and the software are very robust. They look at these images in a different way than the human brain does. And if you talk to any person, we all think we're pretty good at recognizing people. But it turns out some of these algorithms are just a whole lot better than we are. It kind of puts a lot of things in perspective, and it's just fascinating to think about.

[00:03:58] Verrico: Well, that's super interesting. So, can you get into the kinda geeky weeds of how this technology actually works? How does facial recognition technology recognize a face?

[00:04:12] Vemury: Yeah, so, great question. You have a camera system. And I mention that because we also need to think about how the camera influences the performance of the system. But the camera makes a digital photo. The digital photo can be fed into what we call facial recognition software, but it's actually multiple parts. The first part creates what we generally call a template in the biometrics community. That means we take this image, and we turn it into its unique characteristics.

[00:04:38] Vemury: So, what ends up happening is we give a lot of images, or the industry gives a lot of images, to these computer models. They run over all of the different datasets and try to figure out for themselves: what are the unique characteristics of those faces? And it's not always, again, human-perceivable, like we would say the distance between the eyes. It's not really clear what exactly the algorithms are picking up on sometimes. It's a lot more complicated than that. But anyway, they create their own version of these feature vectors, or these templates.

[00:05:12] Vemury: And then these templates are usually a lot smaller than the original image. So, you could very easily have, say, a three-megapixel image, and you're shrinking that down to a few-kilobyte template. You go from that large file and make it maybe a thousand times smaller or something. And that now has all of the relatively unique features of that face. Then what happens with facial recognition is we take all of the templates created from different images, compare them, and try to figure out which templates are the most similar and are likely to belong to the same person.
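
[Editor's note: to make the template-and-compare idea concrete, here is a minimal Python sketch of the flow Arun describes. The embedding function is a hypothetical stand-in (a real system would run a trained face-embedding network), not the software used in S&T's testing; the point is the shape of the pipeline: large image in, small fixed-length vector out, similarity score between vectors.]

```python
import numpy as np

# Hypothetical stand-in for a face-embedding model. It maps an image to a
# small fixed-length "template" (feature vector), e.g. 512 floats (~2 KB)
# derived from a multi-megapixel photo.
def make_template(image: np.ndarray) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    vec = rng.standard_normal(512)      # placeholder features
    return vec / np.linalg.norm(vec)    # unit-normalize, as many systems do

def similarity(t1: np.ndarray, t2: np.ndarray) -> float:
    # Cosine similarity: 1.0 means identical templates, lower means less alike.
    return float(np.dot(t1, t2))

probe = make_template(np.zeros((2000, 1500, 3), dtype=np.uint8))   # ~3 MP image
gallery = {"person_a": make_template(np.ones((2000, 1500, 3), dtype=np.uint8))}

# Identification: find the gallery template most similar to the probe; a real
# system would accept the match only if it clears a tuned threshold.
best = max(gallery, key=lambda k: similarity(probe, gallery[k]))
print(best, similarity(probe, gallery[best]))
```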

[00:05:46] Verrico: Well, that's pretty cool. So, you were talking about unique characteristics, so I can't help but ask this: can facial recognition technology actually determine or identify a difference between, say, identical twins?

[00:06:04] Vemury: Yeah. So that's a really good question. Let me answer that a little bit broadly, right? Let's say that we're talking about biometrics in general. There are some types of biometrics that are actually random. They aren't necessarily genetically underpinned, or maybe the relationship between your genetics and your biometric information is not tightly coupled. The example there would be fingerprints: you have 10 fingers, and none of your 10 fingers have the same fingerprints. Your irises, right? Your left and your right iris are actually different and unique. So, these are kind of randomly formed characteristics.

[00:06:41] Vemury: And because of that, they're very different, very unique, and they will differentiate between you and your siblings and your identical twin. Facial recognition is a little bit different, right? Because your face really is based off of your genetic characteristics and information, and you get these characteristics from your biological parents, right? Like, people say, oh, they have so-and-so's nose and so-and-so's eyes. So, identical twins in particular are about as close as you could possibly get. I have a very long answer for this, but I'll probably just keep it short. What generally happens is when we take these images and try to create that template, you don't need a lot of very high-resolution information to go from the face image to that template.

[00:07:22] Vemury: To really distinguish between, let's say, identical twins, you actually need a lot more information. And this is where we probably need to look at facial recognition systems that actually use skin texture information; they need much higher-resolution imagery. So, I definitely believe it's possible and likely. And I think there were probably a couple of matchers maybe several years ago that would have done a better job at this than some of the machine-learned algorithms, because the algorithms that use machine learning haven't needed that level of detail to get to the level of accuracy they're currently at.

[00:07:54] Verrico: Ah, interesting. So that looks like it could potentially be a challenge area. What is the biggest challenge in facial recognition technology?

[00:08:06] Vemury: I'll be honest with you, there are a lot of challenges with facial recognition technology. Some are technically oriented, and honestly, some have more to do with how people perceive the technology and how comfortable they are with aspects of facial recognition technology.

[00:08:21] Verrico: Uh, yeah. While we're at it, let's talk about that too. What's the biggest misconception about a technology like this?

[00:08:29] Vemury: Right now, there are a lot of technologies out there for facial recognition, and they have different parts and different components. So I think one of the challenges is that people want to say the technology is either good or bad. And the point is, actually, no, you see a range of technology, right? If I took 200 third graders and gave them a math test, some would get an A, and some would not; they might get much lower scores.

[00:08:53] Vemury: The same type of thing actually happens with these technologies too. These broad generalizations about how they work, or whether they work well, or whether they're fair and work well for all people, don't fit into a single soundbite. The truth is far more complicated. And I think the point here, for a lot of folks working in this area, is really: okay, we need to test it, we need to pick the best ones, and we need to make sure that we minimize errors wherever possible. And a lot of that really is possible. This is not a situation where the technology just flat out doesn't work. It works incredibly well compared to where it was before, and it's rivaling a lot of the other biometric technologies that have been out there for a number of years too.

[00:09:31] Verrico: Arun, you've opened the door for a new area of discussion here by mentioning fairness. And fairness is definitely one of those public perceptions about facial recognition technology. Some of the things that have come up in discussion are about the ability of facial recognition technologies to distinguish people of different ethnic groups, and that there may be a higher rate of error with people of darker skin tones and things like that. How does this bias occur, and what are we looking at?

[00:10:07] Vemury: Yeah. So you're getting to a really interesting and, honestly, a really thorny topic. We want the technology to have no errors, right? We want the error rates to be zero. In reality, the error rates aren't zero. There's a non-zero error rate, but we want to make it as small as possible.

[00:10:24] Vemury: We want to make sure that the technology works for everyone, regardless of their circumstances. And in particular, you want to make sure that you're not causing any harm to protected classes, someone singled out based off of gender, race, age, or ethnicity, any of these different factors. One of the things that we're doing here at S&T is we're actually working on the international performance testing standard to measure the performance of biometric systems with regard to different demographic groups. How much difference might we see? How do we measure performance?

[00:10:54] Verrico: Ahh, yeah.

[00:10:54] Vemury: One thing that's generally a little bit more challenging is that while there are requirements to be fair in these settings, there aren't always quantitative tests for how you determine whether something's fair or not. In this area that we work in, with facial recognition and biometrics in general, there's a lot of engineering. We have a lot of data. I'd argue we have more data than maybe some of the folks who work in these other spaces like employment or housing. And so, we can actually get really fine-grained information on how well the technologies work.

[00:11:24] Vemury: The question is, what about something like 0.95 versus 0.94? If we see a slight difference, is that a problem? Is that okay? So there's that tolerance for how we assess fairness. First of all, we want to make sure we're testing performance the same way. And one of the things we look at doing now is breaking out performance not just based off of general characteristics; we really need to do it based off of these demographic cohorts, where we see these differences in performance. And then, how much of a difference is still okay? Like, if we flip a coin 10 times in a row, sometimes we might get heads six times, sometimes we might get heads four times. We might get a little bit of difference there in the measurement, and how much are we okay with?
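
[Editor's note: the coin-flip point can be made precise with a few lines of Python. This is only an illustration of sampling noise, not S&T's actual test methodology: even a perfectly fair process shows run-to-run differences in the measured rate, and the same caution applies when comparing a 0.95 and a 0.94 measured on finite test sets.]

```python
import random

random.seed(1)

# Flip a fair coin 10 times, five separate times, and look at how much the
# measured heads count bounces around even though the true rate is exactly 0.5.
trials = [sum(random.random() < 0.5 for _ in range(10)) for _ in range(5)]
print(trials)  # e.g. counts like 4, 6, 5, 3, 6 heads out of 10

# The same sampling noise affects biometric testing: a small measured gap
# between two demographic cohorts may or may not reflect a real difference,
# which is why test sets need to be large enough per cohort.
```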

[00:12:05] Vemury: These things are kind of random. With a face photo, right, part of what's visible is the skin color or the skin tone of an individual. You might be able to perceive gender, maybe not. You might be able to perceive a number of different characteristics from the face photo.

[00:12:22] Verrico: Excellent.

[00:12:23] Vemury: The algorithms are figuring out for themselves what the unique and distinguishing characteristics of images are. And in some cases, it looks like a fair number of them use some of these characteristics, like skin tone or gender or ethnicity or age, to some degree. Some do it more than others, and some actually appear to use these characteristics very little. But what ends up happening, a lot of times, when you get a photo or you get a potential match, is that the photos will share these common demographic characteristics. Maybe the simple way to do it, to dejargonize this: I grew up in the eighties and nineties.

[00:12:59] Vemury: So, I used to watch cop TV shows sometimes. And whenever there was a suspect, they would describe people in terms of their gender, their race, maybe their clothing, their height, their body type: athletic build versus skinny or slightly overweight.

[00:13:14] Vemury: So this kind of information is how people distinguish between people. I don't think we should be surprised that the algorithms are doing this too, especially when we just let them learn whatever it is they're going to learn. I have relatively dark skin. I'm Indian, I'm from South Asia. I'm more likely to false match against somebody who is also male, who's also roughly 40 years old, and who has darker skin tones. And this actually happens for all people, right? If you're a 20-year-old Caucasian woman, you're more likely to false match against another 20-year-old Caucasian woman.

[00:13:49] Vemury: What ends up happening is, in some cases, some of these differences are a little bit off. In some cases, the error rates are a little bit higher for one demographic cohort versus another demographic cohort. So, we were taking photos of me, let's say a 40-year-old South Asian male, and comparing them to an 80-year-old Asian woman, let's say from South Korea. And we would use that in our overall calculation. Looking back, that was probably okay at the time because the performance rates were so bad.

[00:14:21] Vemury: It didn't really matter. But now that we've gotten this really amazing boost in performance, we need to break it down more accurately. Because if I'm trying to get into a bar and be able to buy beer, and I'm 18 years old, I'm not going to use an 80-year-old woman's photo. I'm going to use a photo that looks more like me, that shares those common demographic characteristics, because that way I'm more likely to get through. That's how people recognize people. This is something every high school kid who uses a fake ID probably knows, right? They want to use an ID of someone who looks kind of like them, who shares their same demographic characteristics. But the way the community measured performance for a long time was not looking at or considering these types of demographic factors in evaluating these systems.

[00:15:07] Vemury: So, nowadays we're doing this in largely all of our evaluations, and have for the last several years: breaking up performance, or disaggregating it, based off of demographics. So, we have better insight into how well these things work for a specific race, specific skin tone, specific gender. So, we can understand whether or not they're working well.
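
[Editor's note: a minimal sketch of what disaggregation looks like in code, using invented cohort labels and toy data, not S&T's evaluation tooling. Instead of one overall error rate, the same comparison results are grouped by demographic cohort so that differences become visible.]

```python
from collections import defaultdict

# Toy comparison log: (demographic cohort, was this comparison a false match?).
# The cohort labels and results here are invented for illustration only.
results = [
    ("male_40s_darker_skin", True), ("male_40s_darker_skin", False),
    ("female_20s_lighter_skin", False), ("female_20s_lighter_skin", False),
]

totals = defaultdict(lambda: [0, 0])   # cohort -> [false matches, comparisons]
for cohort, false_match in results:
    totals[cohort][0] += false_match
    totals[cohort][1] += 1

# Report per-cohort rates rather than a single aggregate number.
for cohort, (fm, n) in totals.items():
    print(f"{cohort}: false match rate {fm / n:.2f} over {n} comparisons")
```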

[00:15:25] Verrico: Right.

[00:15:26] Vemury: I think the other thing I want to point out, sorry, I'm kind of rambling here, is that everybody focuses on the matching algorithm, and that's only, again, one part of the overall system. In a lot of our testing, we test with camera systems too. And we actually find that more errors happen due to the quality of the image that comes out of the camera, or the fact that no image comes out of the camera in some cases because face detection didn't work properly. So, the camera really is important, not just the matching algorithm. We need to evaluate that as well. And then another place where these things come into play is: where are the images coming from, and do they represent the population who are using the technology?
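
[Editor's note: that whole-system point can be sketched as a pipeline. If face detection or image quality fails upstream, the matcher never even runs, and those failures need to be counted too. Every function name below is a hypothetical placeholder, not any particular product's API.]

```python
# A hypothetical end-to-end evaluation loop. capture(), detect_face(),
# good_quality(), and match() stand in for real system components.
def evaluate(subjects, capture, detect_face, good_quality, match):
    outcomes = {"no_face_detected": 0, "poor_quality": 0,
                "matched": 0, "not_matched": 0}
    for subject in subjects:
        image = capture(subject)
        face = detect_face(image)
        if face is None:              # camera/detection failure: matcher never runs
            outcomes["no_face_detected"] += 1
        elif not good_quality(face):  # low-quality capture is a system error too
            outcomes["poor_quality"] += 1
        elif match(face, subject):
            outcomes["matched"] += 1
        else:
            outcomes["not_matched"] += 1
    return outcomes
```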

[00:16:04] Verrico: So what can really be done about this? Tell me, just briefly, what we're doing in the Science and Technology Directorate and in your center, in the testing criteria, to try to see if we can overcome these challenges.

[00:16:18] Vemury: Yeah, great question. And I think there are a lot of things that we're looking at and trying to do here. First of all, we want to have better approaches to testing, so that we can tease out these various factors and really be able to pick technologies that work well for everyone. If there are issues or challenges that need to be overcome, identify those so that we can mitigate them. Work on being very selective: don't just pick randomly from 200 matching systems. Pick the best ones that work well for the wide diversity of people who are probably going to be part of those systems, so we make sure it works well for a lot of people.

[00:16:56] Vemury: Camera systems that do a good job of taking photos of people regardless of their skin tone. Some camera systems do better with lighter-skinned people, some camera systems do better with darker-skinned people. You see a little bit of a difference there. And I think part of the challenge is that we use cameras that are kind of general-purpose, right? The cameras we buy can take photos of landscapes, photos of birds, photos of potted plants, any number of things. I think we need cameras that are designed to take really good photos of faces, and to really better accommodate the diversity of human faces and human skin tones.

[00:17:31] Vemury: If you think about it, for fingerprints, we don't just use any old device for a finger. We have dedicated scanners and devices to take a fingerprint. If I were going to use my irises, I would go to a dedicated camera that is really good at taking iris photos. We probably need to do the same thing with facial recognition cameras, too, if we want to get to these really great levels of matching performance.

[00:17:52] Verrico: Other than the cameras in our phones, which use basic facial recognition technologies to unlock your phone and things like that, in the security operational world, are they using better cameras, or are they still using more general cameras?

[00:18:11] Vemury: I think we're starting to see a transition. I think a lot of the cameras that are used now were more general-purpose cameras that are kind of being integrated into these systems. But there seems to be a conscious effort now to change that. Actually, at this point maybe about two weeks ago, Google talked a little bit more about how they were working on making more inclusive technologies. They actually cited some of our research in their announcement, talking about how the computer vision community thinks about inclusion. Anyway, long story short, there are companies out there working on camera systems that better represent the facial characteristics of the diversity of people and make them more true to life. They're going to think more about making sure they get better images from the diversity of people who actually end up using the technology.

[00:19:01] Verrico: So, I want to definitely touch on a couple of other things here. When you talk about perceptions of this technology, there are a lot of misconceptions out there too. And one of the questions that pops up quite often is the potential privacy risks associated with using facial recognition technologies and algorithms to determine who's who, that kind of thing. So what can we talk about here in how we're protecting privacy?

[00:19:31] Vemury: Yeah, that's a good question. And I guess the answer there is, a lot of things vary depending on who's using the technology and how they're using it. Within DHS, obviously, one of the things we have to go through is a lot of privacy compliance documentation describing: from whom will the data be collected? How is the data collected? How is the data secured, and how is it used? And when is it disposed of? So a lot of these things get documented, and they get looked at and reviewed, and we analyze it. Does it have to be done that way? Are there things we can do to minimize the amount of data that's collected, or how long we have it?

[00:20:06] Vemury: So there are a lot of things that the government does in particular to try to be very conscious and cognizant of the privacy implications, or what happens if the information is compromised or leaked in some way. So, it's going to be interesting to see how this continues to evolve, and how state and federal governments, and even different countries around the world, handle the privacy of this PII, personally identifiable information. Because if you think about it, we're all very sensitive about our face images, especially when governments have access to them.

[00:20:38] Vemury: Me, now, looking back, it's like, wow, I took my photo and put it on social media all over the place. So, there's this thing where we've taken our photos and put them out there, not really anticipating people might use them for facial recognition. And now, looking back, it's like, maybe that wasn't the best thing. But I guess the point here is, it's complicated. I don't have a clear answer for it. It varies from organization to organization, from state to state, from country to country. So, you see some organizations, some countries where the privacy laws are really stringent, and other countries where the privacy laws are quite lenient or maybe don't even exist.

[00:21:16] Verrico: You know, you bring up a really good point when you talk about how widespread the use of digital imagery is and how we put our pictures everywhere. And now so many places are going to things like mobile driver's licenses, where you have digital imagery there. And I know that the Biometric and Identity Technology Center is also starting to work on the mobile driver's license issue. So, can you talk a little bit about what a mobile driver's license actually is and how it works?

[00:21:47] Vemury: Yeah, sure. With mobile driver's licenses, the credit really belongs to, I think, state governments in the US in particular. I think they received funding and grants from NIST, the National Institute of Standards and Technology, many years ago to think about trusted identities. And they've been working on this for a couple of years, and in the last couple of years they've been working on standards for these technologies. So, essentially what it turns out to be is: how do you represent the information from your driver's license on a personal electronic device? I have my phone on me all the time. I might leave home to go down the street, and if I forgot my wallet, I'm not terribly comfortable, but I'll only be out of the house for a little while, so that's okay, I'll just leave it behind. If I forget my phone, I will turn around and go back and get it, no matter what.

[00:22:39] Vemury: It's kind of turned into this thing that we can't leave home without. Anyway, so how do we take that information, how do we represent the plastic card, but on your smartphone? That's kind of where this goes, and how the standards around this are evolving: the ability to submit or provision a driver's license or an identity card to your phone, and how somebody exchanges identity information using the phone. Right now we have things like Google Pay and Apple Pay, where you tap your phone, or maybe you scan a QR code, and you can pay for things directly from your phone and not have to pull out your wallet.

[00:23:13] Vemury: Well, the question is, how do we do these identity transactions? Because that's what a lot of these things are, right? When I go to my grocery store, yeah, I could scan my payment, but maybe I also want to get my rewards. I want to get a discount. I want to prove that, yeah, I'm a member of your frequent shopper club. So, we share a little bit of identity information. Maybe I just have a separate ID with a little barcode or QR code on it, and that's how I present my information. If I go to the liquor store around the corner and I want to grab a six-pack of beer, what do I do?

[00:23:44] Vemury: I go in and I show my driver's license, my plastic card. And that card has a lot of information on it. It's not a dedicated ID, it's a reusable ID, which I think is a nice step, but I also now release my full name. I release my full date of birth. I release my address. All of this stuff is visible and available to a person who only really needs to know that I'm over 21. An mDL is a little bit different, because now we're in a digital format. We can actually do this thing called selective disclosure, which means I only release the information I need to, to the person who's asking for it. So, I walk in with my smartphone, and all I need to do is prove I'm over 21. I just select that option and only release the data that says, hey, this person is over the age of 21. You don't even need gender information. And this has been verified or determined by the state of whatever.
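
[Editor's note: a deliberately simplified sketch of the selective-disclosure idea. Real mDLs (the ISO/IEC 18013-5 standard) use signed, cryptographically verifiable claims, which this sketch omits; it only shows the data-minimization step of releasing the one attribute the verifier needs.]

```python
from datetime import date

# The full credential lives on the holder's device; none of it is sent as-is.
credential = {
    "full_name": "Jane Q. Public",
    "date_of_birth": date(1990, 5, 1),
    "address": "123 Main St",
}

def disclose_age_over_21(cred: dict, today: date) -> dict:
    dob = cred["date_of_birth"]
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    # Only the boolean leaves the device; name, address, and exact date of
    # birth stay private. A real mDL would also attach the issuer's signature.
    return {"age_over_21": age >= 21}

print(disclose_age_over_21(credential, date(2022, 8, 10)))  # {'age_over_21': True}
```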

[00:24:35] Verrico: You know, I had no idea that there was selective disclosure on these up-and-coming mobile driver's licenses, mobile IDs. That's actually fascinating. And that really answers where I was going to go in tying it back to privacy issues. With the selective disclosure aspects of it, that's pretty brilliant, and it kind of gets ahead of it. So overall, there are challenges to facial recognition technology, yes. There are challenges to the acceptance of facial recognition technology in society; I understand that as well. But there are true benefits to the technology. And if you had to summarize what those benefits would be to the average citizen, what would that be?

[00:25:20] Vemury: I think the quick soundbite is that the technologies have gotten very good. And as long as we're picking the right technology and using it in the right use case, we can actually see a lot of benefit. Some of these algorithms, some of this software, work a lot better than the alternative, right? Before facial recognition, what would we do? We'd have an eyewitness, right? We'd have a person, and we'd say, yeah, that's the same person I saw running out of this building. Well, it turns out people aren't very good at that particular task. In fact, there's a whole lot of evidence out there that suggests people have been wrongly convicted based off of eyewitness testimony for a very long time.

[00:25:55] Vemury: With a lot of these technologies that are testable, we can verify the performance. We can understand the error rates and the accuracy, and where there might be things like differences in performance based off of demographics, which we can't really do when we're talking about a person, a human being. It's hard to test people in that way. So, we have these technologies that are testable and verifiable, and they provide a lot of value to help make sure these systems are actually more tuned. We really understand how well they work, and hopefully they work fairly, at least compared to human performance here.

[00:26:26] Vemury: So I think it's really about putting this in relative context. And I would also caution that, within the S&T Directorate, we're not an advocate for the technology. If anything, I see myself as one of its biggest critics; I poke holes in it wherever I can and say, we need to make this better. There are some use cases where the technology really still isn't ready. And there are places where it makes a lot of sense: if I'm walking up to go through a TSA checkpoint and I'm just being matched to my ID or to my pre-check information, that's a really simple use case that works really well.

[00:27:00] Vemury: Unlocking your smartphone works really well. There are other use cases where you might be searched against, you know, every person's photo on the planet, in which case the error rates are going to be a little bit higher. And if you're taking a negative consequence, like you're going to figure out who to arrest or who to detain based off of that kind of decision without a human in the loop, that could be problematic. So, we're not at the place where we're looking at lights-out surveillance facial recognition technologies right now; the error rates are still too high. But I think the important thing here is that facial recognition technologies work, and I think we should be very cautious about facial recognition surveillance technologies, and for that matter, honestly, any surveillance technologies. The error rates can get quite a bit higher there.

[00:27:46] Verrico: So, thank you, Arun. To bring this back to where we started earlier in this conversation: how does this work appeal to that geeky Arun Vemury who grew up as a math geek?

[00:28:00] Vemury: Yeah, it does. To be honest with you, I think there are really two things here. First of all, this stuff is very complicated. The idea that we could really simplify this and frame the technologies as good or bad doesn't really hold. It's a lot more nuanced, a lot more complicated. That being said, there's a lot of value here too. And we should understand where it provides additional value, where it provides additional safeguards, and where maybe it's still not ready. And unfortunately, like we talked about with those 200 students, some of them are good at math and some of them are not so good at math; you really need to understand the math to inform better policy, better laws, better design of your systems, so that you give this stuff the best chance of working, and figure out where it really isn't ready for a particular use case. So I love it. I think it gives me a chance to dig into it and hopefully do a better job explaining how these things work, where they're ready for use, and honestly, where the math doesn't really support their use. So hopefully that helps.

[00:29:01] Verrico: I'm just going to throw this out there. Were you kind of a nerdy kid? 

[00:29:05] Vemury: I was definitely a nerd. I mean, I played sports and stuff as a kid, like Little League, and played soccer and all that sort of good stuff, but I was definitely a nerd with all the school stuff and scholastic stuff. I was a pretty good student, but at the same time, I loved taking things apart and breaking them and then scrambling to fix them before my parents realized I had broken them. That kind of stuff. So, I was kind of a self-learner, trying to figure out how things worked, and definitely very academically inclined. I was a math kid, so I really liked math, and then I jumped into any sort of science class that I could get into. So when people say the word nerd, I look at it fondly, and I embrace it for sure.

[00:29:43] Verrico: Well, of course, nowadays a nerd is kind of a badge of honor. I mean, everybody's proud to be nerds. We're so proud to show off our nerdiness.

[00:29:51] Verrico: As you grew up and evolved into your career, how did you ultimately wind up getting into the biometric area?

Vemury: I'll be perfectly honest with you, it wasn't really intentional. I went to school and I really liked working on computers, and in school I decided to become a computer engineer, working on the hardware, working on the software. I came out of school working primarily as a computer programmer, and I had this early project where we said, we're going to try to put together a little prototype system that works with an iris camera.

[00:30:20] Vemury: And I'm like, that sounds so cool. So, I started playing with it and figuring out how it worked and where it wasn't working. And I was like, huh, wouldn't it be cool if we could fix this or make something work a little bit differently? And all of a sudden, I went from working on a prototype to thinking about technology and thinking about standards, which is really nerdy stuff. It's one thing to talk about making this technology work; when you start talking about standards and standards meetings, oh my gosh, that's a whole other level of nerdiness. And so, I was in that realm for a number of years and then transitioned over to research, when I realized we needed better research to make better standards and then make better technology.

[00:31:00] Verrico: And I'm just going to wrap this up with a final question: what impact do you feel that you are making on society?

[00:31:07] Vemury: Um, that's a great question. To be honest with you, some of these technologies we work on are really important. They provide a lot of value, and they can help a lot of people. Some of them also could be misused, right? So, having folks involved in the process who are going to look at them objectively and fairly is really important, so that we can try to figure out how we as a community, how we as a nation, want to use these technologies to hopefully support the society that we live in. For kids: I think there's a quote, and I'm going to struggle with this one, that technology is neither good nor bad, nor is it neutral. I'm paraphrasing here. So we really do need to be thoughtful in how we use these things, and to do that, we really need to be willing to try them out, to test them, to figure out what works and what doesn't work before we start using them every day, all the time. So, I think it's good to be curious.

[00:32:05] Vemury: I think it is good to try things out. It's good to learn about experimentation, trying things, and figuring out what works and what doesn't work, realizing that it's okay that things work under some circumstances and maybe don't work under others. My car works perfectly well driving down the street; I'm not going to drive it through a sandy desert, because it's not going to work very well out there. So it is about putting these things together, finding out where they work, where they have value, and honestly, being a good steward and trying to make sure that we're using these technologies in the right way too.

[00:32:39] Verrico: That's terrific. Thank you so much Arun. Really appreciate your time today. And thanks for joining me on this episode of Technologically Speaking. I really enjoyed learning more about facial recognition technology, and hearing what's on the horizon for the Biometric and Identity Technology Center. I look forward to any time I can chat with you.

This has been Technologically Speaking, the official podcast of the DHS Science and Technology Directorate. To learn more about S&T and find additional information about what you heard in this episode, visit us online at scitech.dhs.gov and follow us on social media at @DHSSciTech. Thanks for listening.
