Cybercriminals are projected to steal an estimated 33 billion records in 2023. How worried do we need to be about identity theft? Are our medical records secure? And what about keeping people from hacking into our phones? Our guest for this episode is Angelos Keromytis, professor in the School of Electrical and Computer Engineering and co-director of the Center for Cyber Operations Inquiry and Unconventional Sensing.

Host

Steve W. McLaughlin

Provost, Georgia Institute of Technology
Professor, Electrical and Computer Engineering
Guest

Angelos Keromytis

Professor, School of Electrical and Computer Engineering

Transcript

[radio static]

Man: [radio transmission] Worms and viruses were once described in biology textbooks, not police reports. Today, terms like these bring to mind crashed networks, massive disruptions in communications and infrastructure systems, and billions of dollars in damages.

Steve McLaughlin: Cybercriminals are projected to steal an estimated 33 billion records in 2023. How worried do we need to be about identity theft? Are our medical records secure? And what about keeping people from hacking into our phones?

[steam whistle]

[applause]

[marching band music]

This is The Uncommon Engineer.

[music]

Man: [archival recording] We’re just absolutely pleased as punch to have you with us. Please say a few words.

[applause]

Steve McLaughlin: Our guest today on The Uncommon Engineer podcast is Professor Angelos Keromytis, professor in the School of Electrical and Computer Engineering and co-director of the Center for Cyber Operations Inquiry and Unconventional Sensing. Welcome to the program, Angelos.

Angelos Keromytis: Thank you.

Steve: You know, from my side, you know, I get these emails that are phishing attacks, where someone is trying to get me to log in and enter my password on a fake website. And from there, they've got my login and password and can do all kinds of things. I think that's the kind of experience that many people know about and have, but it's so much broader than that. So can you start by talking a little bit about the kinds of attacks that are taking place and how vulnerable we are? And, because there are a lot of techniques here, how do cybercriminals eventually make their way into systems?

Angelos: Yeah, it's a great question, and it is a complicated one in many ways, as you might expect. So, as you pointed out, a lot of things— there's a lot of FUD, fear, uncertainty, and doubt, that is promoted in the news and also by the security industry as to what's going on and how vulnerable people are and so on. The truth of the matter is, we have two general tracks, as it were, of bad behavior on the internet. One is profit-motivated, so that's your run-of-the-mill criminal, or maybe not so run-of-the-mill, that is trying to make money any way they can. And the second— and it's becoming increasingly prominent— is nation-states playing on the internet.

So I would say for an average person, primarily you have to worry about the criminals. But increasingly our lives are going to be affected by how nation-states act or position themselves to act in the future.

Steve: You know, in the first category, you know, again, because I think we have a pretty technical audience, you know, there's data that's sitting out there in servers. It's sitting out in large databases. Presumably companies have done a really good job. Universities, by and large, do a really good job of protecting their data through passwords or through other security measures, yet it still seems possible for very sophisticated criminals to access that data. And maybe that's a little different than where I started, you know, with people tricking you into giving them money— there's kind of that piece. I'm a little more interested in how a cybercriminal breaks into some of these places that are holding 250,000,000 records and have, I'm sure, quite tight security. So can you talk about whether that's a false assumption— about the tight security— and then, how do people do that, of course, without revealing a specific means? But, you know, how does all of that happen?

Angelos: Yeah, so one of the interesting things about security is that there is nothing ever fixed in stone. There is no one way in which people break into systems, right? Anything that goes, goes. Having said that, it is the case that the two— well, sorry, the three main ways in which people break into systems and steal data or information or engage in other nefarious activities are social engineering, and there are two subcategories to that, or software vulnerabilities. A software vulnerability is simply some kind of bug in software that someone found and could exploit to gain control of the software or the system. On the social engineering side, you have the two versions that you kind of alluded to in the beginning. One is you get these attempts to trick you into logging into a site or otherwise providing your password for some kind of service. And the second is you're asked to click on an attachment and presumably see some important information. So I would say— I mean, statistics are generally hard to find, or conclusive statistics are— but I would say that probably 90 percent, if not higher, of all the attacks, and certainly more than that of the attacks that we see in the news, happen that way: the two versions of social engineering, either, “Give me your credentials,” or, “Here, I sent you something” that may look like a document from a trusted source— you click on it, and it does something bad. So it has some code hidden in it that somehow gets executed. That probably accounts for the majority of problems.

But some of the problems that you typically see on servers— these big issues that we see periodically, where somebody broke into a corporate network or a cloud environment where a big company had its data and stole 500,000,000 records or whatever— those typically, but not always, happen because there was a bug or a vulnerability, a flaw in software, and either nobody knew of it or the company that was operating that software did not get around to fixing it, and so somebody exploited it and managed to get access to the servers.

Steve: So then the second piece: by virtue of the fact, say, I'm on a website, let's say a bank, clearly I'm entering data— my information, whether it's personal information or anything— there's a back and forth between myself and, not just the website, but myself and the bank. And so by virtue of that, I'm already interacting with their system just by using the website. And you're saying that if there are potential vulnerabilities in the software that's running that website or the application that's running on the site, then someone might be able to figure out a way to exploit them. So that's interesting— there's no credential needed; you're already interacting with their network, with their applications, and you might be smart enough to get your way deeper into the application than you would guess otherwise. So I think I have that one right.

Angelos: Yeah, you're right. And as a concrete example, the Equifax compromise from a couple of years ago followed that pattern. There was a vulnerability in a public-facing component, and they could get access.

Steve: So before we leave kind of the social engineering side, it would be only prudent for us, with our audience, to talk about best practices for being safe in those environments. Are there a handful of rules of thumb for the two different social engineering aspects that you talked about? What are the rules of thumb that you would recommend people follow?

Angelos: Yeah, so— before I answer your question directly, I'll say the one interesting thing about social engineering attacks is that they are easy to use in what's called the “spray and pray” approach, meaning an attacker can do a broad campaign that would look perhaps like spam, email spam, seeking to convince as many people as possible, but not necessarily any specific person, to give them their credentials or to open an attachment and so on. So there's a baseline, a lower level as it were, of these generic campaigns that don't target you as a person but, of course, if you somehow get compromised that way, then they will act upon the credentials or the access they've acquired.

Now, there's a second level above that, which is campaigns that are a little more targeted, and they may be targeted to a particular company, for example, or to a particular university or to a particular organization. An example of that would be— because I've received a few of those over the past couple of years— emails purporting to be either from the provost or from the chair or from the dean addressed to sort of all the faculty, and so those are a little more tailored, but still kind of broad.

And then there's a third level above that, which is— the lower levels, you call them “spear phishing;” this one is sometimes referred to as “whale hunting” or “whale phishing,” meaning you go after a specific target that is of high value, right, so the company's CEO or the person in charge of the financial accounts. So you see different levels of sophistication across the three different campaigns.

So for the lowest level, the one that is sent indiscriminately, it's typically fairly easy to tell that something is funny simply by looking at what it's pointing at— like it wants you to click on something taking you to a website, for example. Well, is the website the legitimate website? Or it's asking you to open an attachment for which there's not a whole lot of justification given— “We’re the IRS, and we're sending you a spreadsheet that describes your tax due.” Well, a lot of people will actually get worried about this, and that's what this plays on. Honestly, for that kind of attack, it's mostly common sense, but I realize that oftentimes we work under stress, we're tired, or we receive the email at the wrong time.

So I would say the two easiest things to do: one, for any sensitive service that you use, please turn on two-factor authentication, so that a password is not sufficient to get you logged on, but you need something else. And the “something else” sometimes is a phone or a specific application on the phone or, for the higher-security or higher-importance websites, perhaps a dedicated hardware token, a little device specific to your bank, for example. So that goes a long way to mitigate some of these attacks.
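
For readers curious what that "something else" actually computes, here is a minimal sketch, assuming Python, of the time-based one-time password scheme (RFC 6238) that many authenticator apps implement. The shared secret below is made up; in practice a service provisions it via a QR code when you enroll a device, and the server runs the same computation and checks for a match.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA-1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical shared secret
```

Because the code changes every 30 seconds and derives from a secret that never leaves the device and the server, a phished password alone is not enough to log in.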

Clicking on attachments is really hard to get people not to do, because that's how we work; we send files to each other all the time. But consider the source: strangers obviously shouldn't be sending you attachments that you need to open.

If you go to the second level, that becomes harder, because now the attacks are tailored and the source from which these attacks come is plausible, at least. So if the dean sends me a PDF and asks a realistic question, I'd be more inclined to open that— so, again, some degree of common sense. And then things like, well, don't double-click the attachment straight away; save it to disk first. Look at it on the disk, and then, if you decide it makes sense, open it, because oftentimes once you save it to disk, it looks like what it really is, as opposed to what it appears to be in the email. And there are applications, there are services you can use to vet attachments. But it requires mostly a conscious decision by the user.
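
The "look at it on the disk" advice can be made concrete. Below is a toy sketch, assuming Python, of inspecting a saved attachment; the file name invoice.pdf is hypothetical, and the handful of file signatures shown is far from exhaustive. The point is that the bytes on disk reveal a file's real type, regardless of the name or icon a mail client shows, and the hash can be checked against a threat-intelligence service.

```python
import hashlib
from pathlib import Path

# A few well-known file signatures ("magic numbers").
MAGIC = {
    b"%PDF": "PDF document",
    b"MZ": "Windows executable",            # an .exe disguised as a document
    b"PK\x03\x04": "ZIP archive (also .docx/.xlsx)",
}

def inspect_file(path: str) -> None:
    """Report a saved attachment's real type and its SHA-256 hash."""
    data = Path(path).read_bytes()
    kind = next((name for magic, name in MAGIC.items()
                 if data.startswith(magic)), "unknown")
    print(f"{path}: looks like {kind}")
    print(f"sha256: {hashlib.sha256(data).hexdigest()}")

inspect_file("invoice.pdf")  # hypothetical attachment saved to disk
```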

In certain cases— speaking for myself— I use a burner laptop, one that I don't really care about. I'll move an attachment there and open it if I really need to open it but I'm not 100-percent sure about it.

Now, the third level of attacks is really hard to deal with, because now we're talking about an attacker who took the time to know you, and took the time to know the environment in which you operate and how you operate. As an example— and it's not a perfect example— a couple of years ago a New York Times article came out that talked about how the North Koreans— or allegedly the North Koreans— compromised the central bank in Bangladesh and stole $80 million from the central bank account. That group stayed in that network for over a month, watching two machines that were used for doing these transactions, and they were looking at how the users were legitimately using the application before they went through with their action.

Steve: And then they impersonated— had the credentials of— one of those users, so that the system recognized them as a legitimate user performing a legitimate action?

Angelos: Absolutely. And the only reason they got caught is they made a mistake in one of the transactions. Otherwise they would have gotten away with it and with more money.

Steve: Are there other kinds of emerging areas that people are particularly worried about that we don't think about? I mean, maybe not my refrigerator, even though my refrigerator is going to be on the internet. Are there areas that people are talking about that the public should know about?

Angelos: Well, it also depends on the immediacy of results and the general goal of the perpetrators, as it were. So, for example, I think I recall a couple of articles a few years ago about allegedly Iranian hackers targeting, or trying to target, water dams. And I think there was one article I remember, that there had been a serious attempt— and I don't recall how successful it was— on a water reservoir in New York State. So, you know, water causing damage there is perhaps not as immediately dramatic, right— the lights aren't going to go off; the water isn't necessarily going to stop running from the taps right away— but people worry about that sort of thing. And so if it comes out that suddenly the water supply for Atlanta or New York City or somewhere has been affected, the monetary damage may not be there; the human damage may not be there; but the sort of concern might be there.

And in certain cases, the reason to undertake such actions may not be for the damage, but for the sort of erosion: you send a message and erode the will to do something. So in the middle of political negotiations, in the middle of a crisis— a draw-down in the East China Sea, wherever the crisis might be— that can be used as a sort of reminder that there are assets at risk. So really, anything that you can imagine that either can be affected or could be seen to be affected matters. So gas, oil, the healthcare system— if somehow all the electronic records went down for a week or two— and it doesn't have to be for everybody, but for some of the major healthcare suppliers— if those all disappeared, that suddenly starts to have an impact on people's lives. And, you know, that can be leveraged for political purposes, I mean, by other nations.

Steve: Well, I know that you’ve come to Georgia Tech just in the last couple of years from a senior-level position at DARPA, the Defense Advanced Research Projects Agency within the U.S. government, and were a professor before then. And so you've kind of seen academia, then the research community in the U.S. Defense Department, and now back to academia. So— we talked an awful lot about the various threats, about the various scenarios— I'm really curious both a little bit about your path and why that path, and really how it informs the work that you're doing in your lab today.

Angelos: Yeah, it's— I mean, even for myself, looking back at it, it's sort of an unexpected path in certain ways. It started in a way that's not very unusual: like most or many grad students, I finished and decided that academic life appealed to me. I was doing research in security and I continued to do research in security, and, you know, for the most part that meant trying to build better defenses with the information that I had about what the bad guys were doing, or what was in the news, or anything that I could find out through my research. At some point, I decided that wasn't enough, and that point came really at the same time as DARPA coming to me and saying, “Hey, would you consider doing a tour, a temporary tour of duty, with DARPA?”— which is not uncommon for DARPA; people that have been involved in DARPA projects then come and push a vision of their own in terms of the research that they want to see done. And so it was mostly for my personal education, as it were. I wanted to know, OK, is there anything really that I don't know? And of course there is, but what is it that I don't know?

And so DARPA offered me a fantastic opportunity to see what's happening at the nation-state level, both on our side and in what we see adversaries doing. And that was, in some ways, not surprising; in some ways, shocking. It was interesting to see the amount of activity, the intensity of activity, and the dedication of all sides to what they were doing. We think of armies, and we think of DOD— you know, other than the active engagements that we have in, say, I don't know, Syria, or that we have in Iraq and Afghanistan, most of everybody else is standing down; they're training; they're maintaining, and so on. It's kind of a “be ready” posture. Not so for the cyber side. Everybody is actually working— maybe not to the extent that they would be working if there was an actual war going on, but not that far off either.

And so it was interesting to see that, and to see the problems that arise from operating in an environment like this— which, as you can imagine, if you take the problems that we see, or at least sort of hypothesize about, in the private sector, and we see the effects of them— the compromises and the social engineering attacks and all that, and data being stolen and money being stolen and so on— you can multiply those, both as actual things happening and as potential bad things happening. And so my sort of outlook on the field, I would say, changed not because of scientific insights; it changed by looking at what is going on— the scale of things— and kind of a recognition that the path that we had been taking may have been satisfactory from a researcher's point of view— like, I'm very proud of the work that we did— but ultimately, the impact that these things had either took too long or just wasn't there. A lot of the work that we did was great academic work. It didn't make much of a difference at the end of the day, for a variety of reasons.

And so while I was at DARPA, I tried to change part of the direction of the field, as it were. That's the one great thing— you hold the purse strings; you can change, or you can steer at least, what problems people find interesting. And the one thing there that I'm very proud of is that I was able to do this with unclassified programs. Because when you start talking about nation-states and the problems of nation-states, very quickly you end up in a classified world. So I'm proud and happy that I managed not only to fund almost half a billion dollars' worth of research, but also to do a lot of it as much in the open as possible— funding universities, small businesses, big businesses, but in the unclassified domain, which meant that scientific improvements were also being made there. So, having done all that, and my tour coming to an end, I looked at what I wanted to do, and I decided that academia— at least in the immediate future, or in the next 10 years at that time— was probably where I wanted to be again, both for personal reasons, but also for the freedom that it gave me to pursue the things that I wanted to pursue. But then, the work that I want to pursue is tied to the insights I've had, obviously, from what I found out in my years with [indistinct].

Steve: One of the things that you had mentioned before is there's not enough of you out there doing this kind of work— so I think that means, you know, cybersecurity-trained engineers or network security engineers, all of this. Can you talk about the need broadly? We keep hearing about that. What's the need broadly, and what should we be doing to prepare students, at least partially— even if they don't take that as a career— to be prepared for some of the threats that you talked about?

Angelos: Yeah, so broadly I see two different kinds of needs here. One is training the engineers that are going to be doing regular engineering things— they're going to be building systems, writing software, the things that engineers have always been doing and are going to be doing— but knowing what exists, because, you know, honestly, they're the ones that need to be taking advantage of the features, using the tools that are going to allow systems to be designed and built inherently more secure. Then there is the need for the specialists. And the specialists are the ones that are going to build more of these capabilities, or are going to architect a bigger system that brings together capabilities— security capabilities, I mean— from different components and from different subsidiary systems and results in a more secure architecture.

The former means we need, sort of, to pervade security throughout our curriculum, and, of course, that's hard, because I know certain things, but I'm not an expert in transistors, so what can I tell somebody doing a hardware design course about security at that level, other than the general sort of "here are the security principles"? And then for the specialists, the challenge is that, because security is a transverse discipline— it cuts across a whole lot of different verticals in knowledge— it means that for somebody to really get into it, they have to have a good knowledge of at least some system component— whether you start from networking, from software, from architecture, whatever it is— so that you build the security kind of on top of what you learned, or at the very least in parallel with it. So neither of those things is easy— well, the first is kind of easier conceptually to do at scale, but it's still challenging because you need to find the right messaging per course and student. The latter is very hard because it requires a really sustained investment of time by people. And that means, right now, the best way we can do it is really to get somebody through a master's program or even beyond. With an undergrad degree, it's kind of hard to cram everything in.

Steve: Well, I know that one of the service academies— I guess the Naval Academy or West Point— now has a required cybersecurity course, and it makes sense, maybe for every student, or at least every engineer. And I think, you know, those are the kinds of things that we occasionally kick around, because, I'll be very honest, I see cyber threats in the same vein as other existential threats. And I really am not using that word lightly— you know, around climate change and other kinds of things that are really true threats. Just like, as you pointed out, nuclear threats— we seem to have worked our way through much of the nuclear threat. Obviously, that's still out there; it doesn't have the prominence that it once had, but it's certainly out there. But I certainly see cyber threats in that same category— the mutually-assured-destruction kinds of scenarios. I think it is super important. And we really do need to find a way for at least every engineer, or even broader than that, to have some exposure to them.

Angelos: So I'll just add to what you said. Nuclear weapons have a high threshold for use. Basically, they've never been used in anger since— what was it now, 75 years ago?— because everybody knows the consequences. That's not the case with cyber. The consequences may not, right now at least, be quite as severe— nowhere near as severe, except perhaps for contrived scenarios people might think up for movie purposes. But those consequences are increasing in potential impact. And the threshold to do these things, we see, is very low. And so my fear, my worry, is either we sleepwalk into a situation where that is generally accepted— so we have an increasing level of somewhat destructive attacks happening, but they don't cross the threshold of anybody actually doing anything about it, because everybody does it. Or we get to the point where such attacks are still viewed as acceptable, especially compared with armed conflict— kinetic conflict or, God forbid, nuclear use— but we haven't recognized that something we could do in 2010 or 2015 that had a very localized, minimal impact, we do in 2030, and it affects the whole country and shuts it down, and that might catch people by surprise. Nobody knows what the impact of some of these things is going to be.

Steve: You're here at Georgia Tech after a stint at DARPA, and it sounds like you've been, you know, very well-informed about the kinds of problems that you'd like to work on in the even more independent and open environment of the university. I really would love to hear a little bit about some of your projects, what your group looks like, you know, the kinds of things you're interested in, the kinds of things you'd like to explore.

Angelos: Yeah, so, upon arriving here, a colleague— and a good friend at this point— Manos Antonakakis and I decided that our research interests align very well; I knew Manos through some of his work on DARPA programs. We decided to form a new center to pursue much of the work that I want to do in a joint fashion. So that's the Center for Cyber Operations Inquiry and Unconventional Sensing, which is, in fact, a little bit of a double play. The acronym is COEUS, and Coeus is the Greek Titan of intelligence. So it's the C2 center; it's the COEUS center, right— intelligence— and it plays a little bit on this notion that we're looking at things that are nation-state significant, and so, hence, intelligence.

So a lot of what we're looking at is the intersection of big-data-type operations and security. And the reason for that is, at least in part, because when you're looking to have impact and to get visibility on what is happening in the world— even if you then need to narrow down to a particular problem— you need to know what everybody is doing, what is happening in the world, and how you can translate macro-knowledge into specific knowledge for specific problems. So there's a thread of work that has to do with big data, big network data analysis, and that has systems problems— how do you deal with petabytes of data?— and an algorithmic component— what do you do once you have that data so you can process it correctly? But then it ties into the security side very strongly. So that's the motivation for, well, what kinds of data are you going to look into and why are you looking? That is driven by knowledge of how our adversaries work, how we work, how criminals work, and how the world works. There's a lot of data out there; it doesn't mean all of it is useful and interesting. And then we take that knowledge and the insights that come from the data analytics and the sort of domain knowledge, and we find interesting verticals, as it were, where we can apply the knowledge in specific environments.

So a project that we are just about to start— it was awarded a couple of months ago— is one on security for 5G environments, for infrastructure systems. And there, as an example, we're looking at what kinds of capabilities could be deployed in 5G networks and infrastructures, but also on end devices— potentially a phone, a 5G phone— that would enable, to begin with, in the context of, say, a DOD network, defenders to protect, to defend, the network more efficiently than they do now. And that then breaks down into, well, what are the pain points that they encounter? That comes from domain expertise and continued interactions. But then there's the, well, what kinds of technologies could we develop? What could we put in the routers? What could we put in data centers? What could we put into phones that would allow us to find the bad guys better, faster, or more easily, remove them, and so on?
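
To make the big-network-data thread concrete, here is a toy sketch, assuming Python, of one kind of signal such analysis might look for; the flow records, domain names, and threshold are all made up for illustration. The idea is to flag a host sending an unusually large volume of data to a domain that almost nobody else on the network contacts, a crude proxy for exfiltration to attacker infrastructure. Real systems apply far richer models to petabytes of NetFlow and DNS data.

```python
from collections import Counter, defaultdict

# Hypothetical flow records: (source_host, destination_domain, bytes_out).
flows = [
    ("host-a", "update.example.com", 4_096),
    ("host-a", "update.example.com", 2_048),
    ("host-b", "rare-c2-domain.xyz", 900_000),
    ("host-c", "mail.example.com", 8_192),
    ("host-b", "rare-c2-domain.xyz", 1_200_000),
]

# Signal 1: how many distinct hosts contact each domain (rare = suspicious).
# Signal 2: total bytes each host sends out (large = possible exfiltration).
hosts_per_domain = defaultdict(set)
bytes_per_host = Counter()
for host, domain, nbytes in flows:
    hosts_per_domain[domain].add(host)
    bytes_per_host[host] += nbytes

suspicious = {
    (host, domain)
    for host, domain, _ in flows
    if len(hosts_per_domain[domain]) == 1 and bytes_per_host[host] > 1_000_000
}
for host, domain in sorted(suspicious):
    print(f"flag: {host} -> {domain} ({bytes_per_host[host]:,} bytes total)")
```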

Steve: And so my brain is just saying, like, “Oh, my gosh! They just decided on 5G. Isn't it all secure?” You know, we're just about to launch all that, and by the existence of your project— because you talked both about the end users, i.e., the phones, and then the network— I guess the answer is, well, there's a lot of work that needs to be done that maybe could have been done. Well, I hope you got the sense that I could keep talking about this stuff. It's been absolutely fascinating. There are so many aspects and pieces to it. And you've had a really fantastic and interesting career, and we're really lucky to have you here on campus. I know we get to see each other off and on. And I really can't thank you enough for everything you're doing for our students and for the research community at Georgia Tech, and I can't wait to see the amazing things that are going to come out of your research efforts. So thanks, Angelos, very much for coming here today on The Uncommon Engineer.

Angelos: Steve, thank you for having me both on the podcast and at Georgia Tech, since you were the one that hired me.

Steve: OK, well take care, and we'll see you around campus.

[marching band music]

Geekout

Transcript

[big band swing music]

Man: [archival recording] On the Internet, it is easy for a criminal to create a fictitious identity to perpetrate frauds, extortions, and other crimes.

[suspenseful music]

Angelos Keromytis: Yes, so historically, the use or abuse of the internet for national ends had been confined to intelligence collection of some kind or another— so run-of-the-mill finding of secrets, people being careless, or keeping an eye on people. But increasingly, because all sorts of critical infrastructure and emerging critical infrastructure are being connected to the internet for a number of reasons— economic and social, political, whatever— those critical infrastructures can be held at risk. And once you can hold something at risk, you have leverage as a nation-state over the entity whose critical infrastructure you hold at risk. So in some ways it's not really any different than having armies: you build up the armies because you can threaten the opponent. And even if you're seeking peace, you still build up an army so that you deter the opponent from attacking you or pushing you around. The whole nuclear détente, the MAD doctrine, mutually assured destruction— whether you believe in it or not, the strategists certainly believed in it, and the politicians did. And so they— both sides, or all sides, I should say— have nukes and build them up so that they deter others from using them. The same holds for cyber, to the extent that these critical infrastructures are online and can be affected— not just by stealing information, but by manipulating some physical aspect of that infrastructure behind the internet-facing side of it. That means that they can be used either to deter, or for some kind of response that is less than kinetic, or simply to send a political message— again, without necessarily directly causing casualties, human lives to be lost, or anything of that sort.

Steve McLaughlin: And so the example that keeps running around my head as you were talking is, you know, the electric power grid, right? This is the one that you hear about— at least we hear the most about the vulnerability of the electric power grid. It's in the news all the time. You know, so far, we haven't had big, huge outages— or maybe we're not aware of them— but that's the kind of vulnerability you're talking about that ultimately will affect consumers, right? You know, attacks on other kinds of infrastructure that we don't, say, see— but certainly if someone was able to hack into the power company and shut down all the nuclear power plants on the East Coast or something like that, obviously that would have a big— I think that's the kind of nation-state-like action that would have a direct impact on us.

Angelos: Yeah, you're absolutely right. That's perhaps the most dramatic. And I think I've seen at least a couple of movies or TV series that— but really, the same holds for any other kind of infrastructure that can be affected, and the truth of the matter is almost everything is computerized and almost everything is or is about to become connected for convenience or efficiency, and so that opens up the scope for what can be done.

Steve: And so what are the kinds of security threats that people are talking about in 5G that are new or different? Because I think people are somewhat used to the security threats of their existing phones, but there must be new stuff in 5G, and so I'd love to hear what that might be. Or maybe it's just security as a whole— or are there things that are 5G-unique?

Angelos: Well, so 5G is really a specific set of standards, plus this big amorphous marketing entity onto which people and companies project whatever they want, right? But at the end of the day, what has been defined are things like the radio access interface— how does your phone talk to a cell tower, the protocols for that, the security for that— and then certain aspects of what kinds of services are expected to reside within the infrastructure and be offered to the users. Now, beyond that, many of those specifics are not necessarily all tied down, and that doesn't mean that's all there is to 5G. So, having said that, I would think of what we're doing less as trying to fix problems with 5G as we know it, and more as looking at 5G as a tabula rasa, a clean slate, with certain features that were already anticipated by the standards and by the whole sort of ecosystem, the ISP ecosystem— and can we use those to do security better in that environment? And that is motivated both because we want to do better, but also because the expected use of 5G, in terms of what kinds of applications, pervasiveness, and so on, is going to explode well beyond how current cell phones are used.

Steve: I'm really curious about your perspective on, you know, security designed into systems from the very beginning, because it sounds like the way security has evolved over 20, 30, 50 years is: we build a piece of hardware, we connect things, they do things, and then— oh, yeah, that's right— we need to make it secure, so we overlay it. And it seems like traditionally systems have been designed that way, although maybe you kind of hinted that in 5G there was some, you know, inherent security designed in. And I'm wondering— what's your perspective on that? I mean, the idea that we might design security at the transistor level, and then we would design security at the chip level, and then at the next level— and we kind of do it at each layer so that it's inherently part of the system. So I'm curious about your perspective on that and how security is evolving as an inherent piece. Just like transistors and electrons and wires are inherent to operational systems, shouldn't security be that as well?

Angelos: Yeah, so traditionally security wasn't designed in because of, one, perceived or real complexity; two, lack of tools; and three, lack of motivation to put it in. I mean, honestly, why would a hardware manufacturer put in security features if nobody's going to pay for them? Even if the end users wanted them, if they're not going to pay for them, why would the manufacturer make the investment? So those things have changed over time, and they keep changing— meaning, certainly, people are willing, to a larger extent than before, to pay for security features. Now, how much and how effective, right? What the cost-benefit ratio is there is all up for debate. But increasingly in the past 10 years, and even more so maybe the past five, we've seen a lot of push at all the layers— starting from sort of transistors, hardware, chips, low-level software, high-level software, all the way as high up as we can go— to have tools and design patterns that the software engineers, the system engineers, can easily take advantage of, because that's when these things will happen. If you think of security as pixie dust that is sprinkled over a system once it's been designed, or as something that happens by bringing in the security people by themselves to look at it and add things, then you're going to lose. You're going to lose because there aren't enough of us security people, and really we get in the way, and we slow things down because of all kinds of objections. So unless the default output of the design, development, and maintenance process of a system— software, hardware, or whatever— is to be, you know, as secure as we can make it— there's no such thing as perfect— we're never going to really get there. Now, personally, I'm a little pessimistic about the prospect of shutting out the vast majority of attackers, but that's not to say that we can't raise the bar enough that— if you remember the example of the first-tier, second-tier, third-tier kind of adversary sophistication— we can actually shut the first tier out, and then we can focus resources on the more sophisticated, and fewer, bad guys.

Steve: I really like your comment about designing security in, and I'll take it one step further, because I'm proud— I programmed a microprocessor at one time, right— and so I think what you're describing is, whether it's even at the instruction-set level or whatever, designing at the very, very finest-grained level— a transistor or a circuit. I think you're saying things are very likely to evolve to where tools will be at the designers' disposal, should they choose to use them, right? And so, for some applications and some microprocessors or whatever, it's not needed, and, for others, that inherent security will be needed from the beginning— so it will be an add-on, but it'll be an add-on at a much, much lower level, which designers will have the ability to turn on or not. Is that how you see it? I mean, are we there today, or do you see that's kind of where inherent security is headed?

Angelos: Well, so you're right, and I think there are certain features that exist even on commodity processors or processors that are coming out. There's work, for example, that goes into the latest— or one of the latest— ARM processors on a capability scheme that has at least 20 years, actually probably close to 30 years, of research behind it, and finally it's baked into hardware. So it took 30 years to get that feature, which is considered kind of a really important way of separating, of keeping bad data from good data or bad code from good code, right— so, at the really fundamental level, keeping things apart and minimizing exposure within a system. The question is, well, how long is it going to take now for the operating system vendors to actually use it? And how long is it going to take for APIs or primitives to be exposed to the application developers, and for the compilers to take advantage of them, and so on? Because for some of these attacks, it doesn't do you much good to have a feature in the hardware, because the attack happens at a much higher level. There are many attacks that stay within the browser, for example; a secure processor won't do you a whole lot of good against those— an insecure processor is going to be just as good. If you want to take it a step further, we hear a lot about what are called influence operations, which really reach out through cyberspace to manipulate the views of people— and that is actually part of cyber writ large. Well, it doesn't matter how secure your processor or your system or your network is; those attacks happen sort of at a different semantic level. So, to paraphrase Terry Pratchett, one of my favorite authors: it's turtles and elephants all the way down— referring to Discworld, a really fun series of fiction books. That is to say, yes, we're going to see things in hardware, and we are seeing things in hardware, but there are primitives that need to be built at higher levels.
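
The capability scheme he describes is likely CHERI, prototyped in Arm's Morello program; that attribution is an assumption on my part, since the interview doesn't name it. The real mechanism lives in silicon, but a purely illustrative sketch in Python conveys the core idea: a pointer carries its own bounds and permissions, every access is checked against them, and the classic buffer overread simply traps.

```python
from dataclasses import dataclass

MEMORY = bytearray(256)  # stand-in for physical memory

@dataclass(frozen=True)
class Capability:
    """Toy model of a hardware capability: a pointer plus bounds
    and permissions that travel with it and cannot be forged."""
    base: int
    length: int
    readable: bool = True
    writable: bool = False

def load(cap: Capability, offset: int) -> int:
    """Every access is checked against the capability's bounds."""
    if not cap.readable:
        raise PermissionError("capability lacks read permission")
    if not 0 <= offset < cap.length:
        raise IndexError("out-of-bounds access trapped")
    return MEMORY[cap.base + offset]

buf = Capability(base=16, length=8)   # may touch bytes 16..23 only
print(load(buf, 3))                   # fine: within bounds
try:
    load(buf, 100)                    # the classic overread is stopped
except IndexError as err:
    print("trapped:", err)
```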

Steve: So one of the things we always talk about on The Uncommon Engineer is what's your path? How did you find your way to the career that you have now? Because a lot of our audience is, you know, junior high, high school students, even Georgia Tech students that are really curious about your path and some of your passions, so how did you decide to do what you're doing?

Angelos: Yeah, that's a great question. And looking back, there are probably a number of different points. I'd say the two earliest: I was in high school, and one of my good friends at the time had just been admitted to university, and he invited me to visit him. And they had a computer terminal room— I had a PC at home, but I didn't have one of those Unix systems. And so he said, hey, play with it, just to see something different. And so very quickly I found an administrative application, which, however, did not allow me to go deeper, and so I spent the next hour finding a way to bypass it— and I was successful, having never seen that system. And I said, hah, this is fun, sort of bypassing security mechanisms. I had always wanted to do computers— computer science, I mean— at the time, and that kind of nudged me in the direction of security. And then in my senior year— I think it was my senior year in college, after I had been dabbling in security— I had a professor, who has since become a really close friend and a close family friend, who came to teach at the University of Crete. He had just received his Ph.D. from Columbia University, and he came back to Greece to teach for a couple of years. And he taught security— I'm sorry— a course on secure system design or something to that effect, and I just fell in love with that class. We then did a project together that ended up becoming a standard for [indistinct], and then it kind of went from there. He said you should do grad school, introduced me to my advisor, and off it went. At that point, I feel like I was on a specific path.

[marching band music]