AI, machine learning, Big Data… it's everywhere. But what's the difference between it all? And, further, what does it mean for our world today?

Part One of our AI series introduces Justin Romberg, here to provide a primer on artificial intelligence (AI). He's a professor and researcher in the School of Electrical and Computer Engineering, and he has published eight papers in 2019 alone.

Host

Steve W. McLaughlin

Georgia Institute of Technology
Provost, Georgia Institute of Technology
Professor, Electrical and Computer Engineering
Guest

Justin Romberg

School of Electrical and Computer Engineering
Schlumberger Professor


Transcript

Woman: We're going to go on an excursion into artificial intelligence.

Steve McLaughlin: AI, machine learning, Big Data—it's everywhere. But what's the difference between it all and, further, what does it mean for our world today?

Welcome to Part One of our AI series.

[steam whistle]

[applause]

[marching band music]

I'm Steve McLaughlin, dean of the Georgia Tech College of Engineering, and this is The Uncommon Engineer.

Man: [archival recording] We’re just absolutely pleased as punch to have you with us. Please say a few words.

[applause]

Steve McLaughlin: Today our guest is Professor Justin Romberg here to give us a primer on AI. He's a professor and researcher in the School of Electrical and Computer Engineering here at Georgia Tech. Welcome to the program, Justin.

Justin Romberg: It's great to be here. I think this is the first podcast I've ever done.

Steve: Awesome! So it seems like you can't go an hour without someone talking about artificial intelligence, AI, machine learning, Big Data. And because so many of our listeners are high schoolers or folks who aren't engineers or mathematicians, can you say a little bit, from a high level, about what artificial intelligence is and what it means to us?

Justin: Sure. I mean, it's a little tricky to define, as the definition of the word is constantly in flux, especially these days with it being such a hot topic. I think when the idea of artificial intelligence was first conceived, maybe it was by Alan Turing when he was laying down the fundamentals of modern computation. Maybe it was, you know, when a machine can fool somebody into thinking that it's human. The way he posited this was: if you're having a conversation with someone on the other side of a screen, you ask them questions, they reply through the monitor, can you determine whether or not the person on the other side is a human? And that kind of definition continued on, I think, for the next few decades, where what researchers were looking for was some kind of emergent intelligence that would somehow act in a human way. I don't think that's quite what people mean by it right now.

Steve: One of the things that you often hear coupled with AI is machine learning. Whereas I think the general public hears about AI all the time, maybe less so machine learning, in our communities people kind of use those two terms together. Can you say a little bit more about machine learning as it relates to AI, or what's the difference?

Justin: Sure, yeah. People use these words in totally different ways. Many people think machine learning is a subset of artificial intelligence; I don't quite believe that. I would say machine learning is a way to come up with an algorithm that's data-driven. In classical statistics, which machine learning builds on, you have this idea of, OK, I have this bunch of data and some sort of handcrafted rule about how to make a decision. The philosophy of machine learning is to throw all of that to the side and just try to learn the rules automatically. And it has been tremendously successful in areas where two things are true. One is that there is a lot of data to learn from; that's one of the differences between machine learning and human learning, it actually takes many, many examples for a machine to learn something useful. The second is places where maybe we didn't have great models before. We have great models for how the planets move, even fluid flow and other physical systems; we've had fantastic differential equations for 200 years. But for doing things like figuring out or describing what a sailboat is or what a fire hydrant looks like, there were heuristics for that in the image processing and computer vision communities, and they worked well enough, but there weren't any real models. Really letting the machine figure that out all on its own was a watershed in those fields, for things where you didn't really know how to describe things in mathematical terms but were kind of trying to anyway.
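
To make the "handcrafted rule versus learned rule" contrast concrete, here is a minimal sketch, not from the episode: the toy data, the hand-picked threshold, and the choice of scikit-learn's LogisticRegression are all illustrative assumptions rather than anything Romberg describes using.

```python
# A minimal sketch (not from the episode) contrasting a handcrafted decision
# rule with a rule learned from data, in the spirit of Romberg's description.
# Assumes NumPy and scikit-learn are installed; the toy data are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: two measurements per example, with labels 0 or 1.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Classical approach: a decision rule a person writes down by hand.
def handcrafted_rule(x):
    return int(x[0] > 0.2)          # threshold picked by a human

# Machine-learning approach: learn the rule automatically from many examples.
model = LogisticRegression().fit(X, y)

x_new = np.array([[0.3, -1.0]])
print("handcrafted:", handcrafted_rule(x_new[0]))
print("learned:    ", model.predict(x_new)[0])
```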

Steve: You know, when you were talking about Alan Turing, so that's you know 40, 50 years ago, you know, it brought to mind the computer HAL.

HAL: Well, I don't think there is any question about it. It can only be attributable to human error.

Man: Hello, HAL, do you read me? Do you read me, HAL?

Steve: That's how many people view artificial intelligence, like, oh my goodness, this computer is going to take over the world! But, like you said, that idea has been around for decades, and then things kind of went quiet until, whenever, five or eight or ten years ago, when people started talking about artificial intelligence again. What do you think caused that resurgence in the research area, or why now? Why is it all of a sudden hot?

Justin: I think there are two well-known reasons for why now, and they're the two things at the core of any artificial intelligence algorithm: data and computation. Our ability to compute things, that power has gone up exponentially for decades now. And then I think what's special in the last couple of decades is our ability to collect data digitally. So all these tremendous advances in artificial intelligence, being able to automatically recognize and describe things in a scene from a camera picture, or to take text transcriptions and extract some kind of semantic meaning from them, all the great advances that have happened over the past 10 to 15 years are really in fields like those, where there has been a tremendous amount of data available.

Steve: You know, one of the things that you hear an awful lot about is the singularity, the point at which a computer would be so powerful and have such capability that we would lose control.

Justin: That's right.

Steve: Talk about that. Do you worry about these things? Should the public worry about these things? Or, where are we on that kind of singularity event that may or may not occur?

Justin: Right. I mean, first of all, by its very nature, we might not know when that's occurring until maybe it's too late. But personally, of all the things I worry about in terms of applications of artificial intelligence, that's really not on the list. I have more mundane but practical concerns about people losing jobs, being replaced by autonomy. With all this talk of self-driving cars we had earlier, the number one occupation for males worldwide is driver right now. So there are massive displacements that might occur. Again, we don't know; no one knows exactly what's going to happen. And there seems to be only a small portion of our people in politics who recognize this and are preparing, or at least preparing society, for what might happen when it does. So my misgivings run more along those lines, in terms of the effect on the world economy and human well-being. That's not to say that things like the singularity and the machines taking over are impossible in theory or in practice; it's just not something that crosses my radar on a daily basis.

Steve: You know, one of the ways that people listening might be experiencing machine learning, or things like what we've been talking about, is the infamous “I'm not a robot. Here are nine pictures. Click on all the pictures that have fire hydrants in them.” Can you talk a little bit about how people might relate that experience to AI and machine learning?

Justin: That's right. I would say those are great examples of the failures of artificial intelligence, actually, and of how we can capitalize on them. The reason it knows you're not a robot is that you are solving a problem that is not easy for a computer to solve right now. So you verify that by solving an image recognition problem that the current state of the algorithms just can't untangle. What's super interesting, though, is that oftentimes when you do that, it gets fed back as a training sample and might actually help artificial intelligence make better decisions in the future.

Steve: So the current limit of artificial intelligence is that it can't identify the image, and so it needs the human to do that. But over time, someone will accumulate the decisions you made, and when it sees that image again, it will be able to automatically do the work that you did. Isn't that the whole point of machine learning?

Justin: That's right. There is a constant arms race going on right now around these failures of artificial intelligence. Part of this is that there is a whole subfield now of adversarial machine learning, or adversarial intelligence, where you can have a picture, and the algorithm very reliably tells you about the fire hydrant in the picture, and then you make an extremely small perturbation to all of the pixels, a perturbation that, under a classical model, would make absolutely no change in what happens next. But because you have these massively cascading representations that are very rich, it makes an error. You could actually make it classify any number of different things by adding these small perturbations. Now, that perturbation has to be very carefully chosen; it itself takes a lot of computation to figure out what it should be. They tend not to happen in practical situations. We don't know for sure what might be happening out there, but it's raising all kinds of interesting questions about what we're expecting from mission-critical algorithms in terms of performance guarantees.
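
As a rough illustration of the kind of perturbation Romberg describes, here is a minimal sketch, not from the episode, using the well-known fast gradient sign method in PyTorch. The tiny untrained model and the random "image" are stand-ins I've assumed for the example; with a real trained classifier, an equally small, carefully chosen perturbation can change the predicted label even though the picture looks unchanged to a person.

```python
# A minimal sketch (not from the episode) of an adversarial perturbation:
# a tiny, carefully chosen change to every pixel, computed from the gradient
# of the loss with respect to the input (the fast gradient sign method).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a trained image classifier (e.g., one that spots fire hydrants).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32, requires_grad=True)   # toy 32x32 RGB "image"
true_label = torch.tensor([3])

# Gradient of the loss with respect to the input pixels.
loss = nn.CrossEntropyLoss()(model(image), true_label)
loss.backward()

# Nudge every pixel a tiny amount in the direction that increases the loss.
epsilon = 0.01
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

# With this toy untrained model the label may or may not change; with a real
# trained network, perturbations like this routinely flip the prediction.
print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```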


Steve: So if you ask me, that's reassuring: these algorithms that we're using for machine learning and artificial intelligence are robust, as long as there's not somebody sneaky behind the scenes, an adversary trying to mess things up; these algorithms, when crunching on enormous amounts of data, do the right thing.

You know, one of the other things that I think a lot of people experience, though it's not at my home yet, we're resisting, is home assistants like Alexa. I'm really curious about your thoughts on how those systems work today, how they're going to work in the future, and how much they play into what we're talking about with artificial intelligence. And I guess the real question is: is Alexa really listening? Anything you can do to provide insight on that... no, I'm serious. Can you share more about the kinds of things people are experiencing?

Justin: That's right. So Alexa is in my home. My kids love her; I have 10-year-old twins. I can't get her to do fundamentally useful things like never play a Taylor Swift song again. She was under direct orders, but somehow it still comes on. But to the point: is Alexa always listening? In a way, yes. Whenever you say the word “Alexa,” or whatever keyword you programmed, it wakes up, so you know that it's listening. Do we know exactly how much the Amazon devices are listening and processing? Well, we don't know. We only hear rumors; only a couple of the software engineers and the design people [inaudible] really know what's going on. And that, I think, brings us to another fundamental tradeoff, one that goes even past artificial intelligence: what are we willing to trade some degree of privacy for? I think most of our concerns about people recording us in our house are mostly philosophical and not practical; we talk about mundane things. And yet it still seems fundamentally wrong somehow, at least to people my age. So we feel like we're giving up something in principle. There could be dramatic things we're giving up in practice; maybe we just don't quite know what they are yet. And it also feels like a decision that we did not quite make consciously moving forward. So I think it's great that topics like this are starting to get broad discussion, because it's a very important philosophical topic about where we stand as a society.
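
For a sense of the wake-word pattern being described, here is a minimal, self-contained sketch. It is not from the episode and certainly not Amazon's actual design; every name in it is a hypothetical placeholder. The point is simply that the loop stays dormant until it hears the keyword, and only then handles what follows.

```python
# A hypothetical illustration (not a real assistant's API) of a wake-word loop:
# the device "listens" continuously but only acts once the keyword is heard.

def transcribe_frame() -> str:
    """Hypothetical stand-in for a local speech recognizer on one audio chunk."""
    import random
    return random.choice(["(background noise)", "alexa", "play some music"])

def handle_request(text: str) -> None:
    """Hypothetical stand-in for processing a spoken request."""
    print("processing request:", text)

def wake_word_loop(keyword: str = "alexa", max_frames: int = 20) -> None:
    awake = False
    for _ in range(max_frames):          # stand-in for "run forever"
        heard = transcribe_frame()
        if not awake:
            awake = keyword in heard     # stay dormant until the keyword appears
        else:
            handle_request(heard)        # only after waking is the request handled
            awake = False

if __name__ == "__main__":
    wake_word_loop()
```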

Steve: That's a really interesting point, because so many of our systems, whether they're communication systems or computing systems, were designed to just make things work, to do the job we were envisioning them to do as fast as they can and as accurately as they can, without regard for security, without regard for privacy. Those pieces are now becoming so much more important, and it's great to hear that people are starting to recognize that it's not just about being right, and it's not just about being efficient. It's also about the fact that some pieces need to be private, or some people want them to be private. Is there a way to describe more about how those systems would ultimately be designed? Because I think the way you're describing it is, hey, we have lots of data, we've got machine learning algorithms and AI that come up with a solution. How would we then say, “No, this thing also needs to be private,” or, “It needs to be secure”?

Justin: Right. To be honest, I think the best solution maybe is not in the realm of engineering; it's really in the realm of transparency, of having very clear guidelines about what companies or the government can do with data like this and having to demonstrate that this is exactly what they're doing. Having very clear expectations and policy about these aspects, I think that would be the most important thing. Because you're right: when you're trying to solve an interesting problem or trying to make money, that's the goal, and if you have to take over parts of other people's lives, take something away from their privacy, that's a price other people are willing to pay.

Steve: So for some of our listeners, probably high school students or college students, what would you say if they wanted to get into machine learning, AI, and big data, whether they're an engineer or not an engineer? What advice would you give to them? This is obviously an extraordinarily exciting field, and so much of our future is going to be oriented around it. What advice would you give them on how to prepare for a career in this space?

Justin: OK, I will give one very specific piece of advice, which is what we were hinting at earlier, and that is: study applied mathematics, if you want to know what really happens inside machine learning and artificial intelligence algorithms. Probability and statistics and linear algebra, these are the tools that make modern-day data science possible. And not only that, you don't know what the interesting problem set will be ten years from now. I, like you, Steve, was not raised in this current environment of artificial intelligence and machine learning, and yet I was trained with the right mathematical background to completely appreciate and contribute to these things now that they've revealed their importance.

Steve: And so as an engineer, I'm heartened by the fact that you didn't say learn coding and use the Google or Microsoft suite for artificial intelligence. I think that's really important for folks to hear. For the students in your research group, and because you're doing research at the cutting edge of this space, very mathematical and very advanced, what kind of background would a student have to come in with to do research in your group?

Justin: So most of my students are very strong in mathematics. A large component of my personal research is really on the theory side of things; I still write papers that have theorems in them. But as an engineering professor at Georgia Tech, I of course very much value things being put into action, so I do have a part of my research program that is really applied. Some students, when they first come in, know what they want to do; they want to do some theory. But I also have students who come in thinking they want to do one thing, and their thesis ends up going in completely the other direction. I've seen it go both ways. I've had people who were really interested in theory end up doing something like coming up with a reinforcement-learning controller for a microcircuit. And then I had other students who came in as great coders, fantastic, and I started them off with some computer vision work, and they ended up proving theorems about algebraic structures of different types of matrices. So you never know, and they really don't know until they get going. But in general, you can't go wrong, again, by having an extremely strong background in applied mathematics.

Steve: One of the things that we always ask our guests here on The Uncommon Engineer is, Justin, what makes you an uncommon engineer?

Justin: I can tell you some ways that Georgia Tech has changed me, and it's actually related to what I just said. When I came here, I was very focused on mathematics. As I just described, I wanted everything on a firm mathematical footing; otherwise I didn't feel like I could proceed or understand something. When I came to Georgia Tech, which is just a huge place full of people doing all kinds of interesting things, I actually started to talk to people who build imaging systems and people who build distributed processing systems, and struck up meaningful collaborations with them. A decent percentage of my paper output right now is collaborations with other faculty in ECE, doing things like figuring out how distributed optimization algorithms might be applied to computing with multiple cores, or figuring out how some abstract math we worked out for solving a certain type of system of equations could be applied in underwater acoustics to track ships. So I really value that work too and always devote a certain portion of my time to it. I find that a valuable piece of my identity, at least.

Steve: Well, we just want to thank you for coming on The Uncommon Engineer today, Justin. We're extraordinarily fortunate to have on campus folks like you who are really committed to good, solid mathematics, but also to those applications. I know that you care tons for your students, so we're lucky to have you here, and thanks for coming on today.

Justin: Thanks, Steve. It's a pleasure to be here.

Steve: Tune in next month for Part Two of our AI series, “All About Bias and Fairness in Algorithms.” That's all for now from The Uncommon Engineer. I'm Steve McLaughlin, and thanks for listening.

[marching band music]
