Machines are starting to think, remember and act in our physical space. While these technologies are going to do a lot of good, the shiny future they were supposed to herald might turn out to be a dystopia, says ‘chief cyber diplomat’ Dr. R. David Edelman.

Dr. Edelman was a keynote speaker at a 2018 conference in Toronto held by the Canadian Media Directors’ Council (CMDC), a series of talks centered broadly on the theme of Truth and Purpose.

At his day job, Dr. Edelman oversees efforts at the Massachusetts Institute of Technology (MIT) to help technologists figure out the public policy implications of their work. Prior to taking on the role, Dr. Edelman served at the White House as an adviser on technology and national security to presidents George W. Bush and Barack Obama.

In those earlier positions, he said he noticed emerging trends and disruptions, caused by technology, that we’re now seeing across the economy and society at large. Artificial intelligence (AI), for example, is doing a lot more than enabling Google to do its job better.

Dr. Edelman provided a tale of two robots to illustrate.

One is an elderly care robot, designed to offset the critical shortage of health-care workers. The other is an office security robot, which appears to be a replacement for relatively plentiful human security guards.

“They are the best of robots and they are the worst of robots,” he explained.

Both of them, he said, could prove to be beneficial, but the second robot will likely displace existing jobs. This type of disruption has the potential to affect the entire work force.

And from an ethical standpoint, advanced mathematics and advanced technology are a difficult combination for most humans to grasp. What happens if we get the AI programming wrong, or it’s controlled by bad humans? Technologists want to move fast and break things, but as we get deeper into the world of AI and machine learning, there’s less of a chance for the majority of humans to get involved in the development process.

Dr. Edelman said we need to develop new sensibilities to try to avoid negative applications because, he added, personal, professional and ethical risks are everywhere.

What follows is a transcript of the Q&A session held after the presentation:

Sean Stanleigh: It’s bad humans that make bad robots. How do we, as consumers, as citizens, deal with the fact we don’t have influence over the bad robots?

Dr. R. David Edelman: All the dystopia we’re seeing today is actually a function of ignorance, not malice. If you look at the great freakout … as it pertains to Facebook, for instance, or even in the past year (I don’t want to beat up on Facebook but they happen to be in the news a lot).

By their own admission, a lot of the engineers at Facebook did not consider some of the broader social consequences of the way in which they optimized the news feed, the way in which they optimized the business practice to figure out whether or not, for instance, there were Russian spies hijacking the platform.

You’d be fair to respond, as many of them have, by saying: “Wait a minute, is it our job as a tech company to know whether or not Russian spies are infiltrating our platform? The corner shop guy doesn’t know whether he’s selling throat lozenges to a Russian spy, so how is that a ‘know your customer’ obligation?”

What I’m trying to get at is that these technologies actually create hyper-empowered corporate entities in a way that hasn’t previously been possible. This is technology that was designed to optimize ads, but it’s overoptimizing. And when people talk in a technical sense about what they’re worried about AI doing, it’s that overoptimization problem.

It’s that the machines just go with whatever parameters you set, whether it’s in prediction or something else; they don’t know better. Here’s the hot take for today: Artificial intelligence? Not all that intelligent in the way that many of us would think about intelligence. These systems are very, very narrow in their thinking; ‘transfer learning,’ what we think of as context, and certainly ethics, is just not there.
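
To make the overoptimization point concrete, here is a minimal, hypothetical sketch of a ranking system given a single objective, predicted engagement, and nothing else. The items, scores and the ‘extremeness’ field are invented purely for illustration; this is not how any real platform is built, but it shows how a narrow objective, left unconstrained, drifts toward the most extreme content, and how adding even a crude second criterion changes the outcome.

```python
# Illustrative sketch only: a toy ranking loop showing "overoptimization".
# All item data and function names are invented for this example.

items = [
    {"title": "Local weather explained",        "predicted_engagement": 0.42, "extremeness": 0.1},
    {"title": "Ten wildest storms ever filmed", "predicted_engagement": 0.61, "extremeness": 0.5},
    {"title": "The moon landing was faked",     "predicted_engagement": 0.93, "extremeness": 1.0},
]

def rank_for_engagement(candidates):
    # The only objective is predicted engagement: nothing in it says
    # "don't amplify misinformation", so the system cannot weigh that at all.
    return sorted(candidates, key=lambda c: c["predicted_engagement"], reverse=True)

def rank_with_constraint(candidates, max_extremeness=0.6):
    # One crude way to "build ethics in from the start": add a second
    # criterion the optimizer must respect before it chases engagement.
    allowed = [c for c in candidates if c["extremeness"] <= max_extremeness]
    return sorted(allowed, key=lambda c: c["predicted_engagement"], reverse=True)

print(rank_for_engagement(items)[0]["title"])   # -> "The moon landing was faked"
print(rank_with_constraint(items)[0]["title"])  # -> "Ten wildest storms ever filmed"
```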

The first step we need to collectively demand, of these companies and of individual researchers … is that we actually think really hard about how to build ethics in from the start. How to make sure that these systems that are optimizing for one thing aren’t accidentally undoing major progress in social justice that we’ve made over the past 50 years.

In the same way that there are very basic steps you can take to fix your personal cybersecurity that aren’t all that sexy or all that interesting, I think those sorts of steps would probably deal with a significant percentage (maybe 70 per cent) of the great concerns that I have right now about AI.

But you’re absolutely right, this is a general purpose technology that is going to be implemented broadly across society, and it is going to be implemented by some bad actors. That’s why the other piece I think we need to be working on, and, full disclosure, one of the things we’re working on at my lab at MIT, is how you pick these systems apart.

It used to be that when you had linear code you could debug a system. You’d find the thing that was wrong with it, and ‘oh, it’s causing the system to crash,’ take that line out, put in a different line, you’re good.

Other hot take: Anyone who tells you that they know exactly how a neural net works – that’s the underlying architecture of artificial intelligence – how it gets from A to B? That person is lying to you. We just don’t know. It is a sort of magic, even to the most advanced AI researchers.

There is work that is happening right now to design systems from the start, so we can fix them later on. We can see what their thought process is, to put a non-techie gloss on it, and literally we’re going through and doing research that if it were in the biological context, we would say we are partially lobotomizing artificial intelligence. We are cutting out individual neurons and seeing, ‘does that make it have this other effect,’ and that’s a way you can actually start to probe the system, and attack it before it’s ever introduced into the wild to see if it’s going to have those negative implications.
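
The ‘partial lobotomy’ Dr. Edelman describes corresponds, roughly, to what researchers call an ablation study: switch off one internal neuron and measure how the system’s output shifts. The sketch below is a toy illustration of that idea on a tiny, randomly initialized network; the network, its weights and the input are all made up, and real interpretability research works on far larger models.

```python
# Illustrative sketch of the "partial lobotomy" idea: ablate (zero out) one
# hidden unit of a tiny neural net and see how much the output changes.
# The network, weights and input are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 2))   # hidden -> output weights

def forward(x, ablate_unit=None):
    hidden = np.maximum(0, x @ W1)          # ReLU hidden layer
    if ablate_unit is not None:
        hidden[..., ablate_unit] = 0.0      # "cut out" one neuron
    logits = hidden @ W2
    return np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)  # softmax

x = rng.normal(size=(1, 4))                 # one example input
baseline = forward(x)
for unit in range(8):
    shift = np.abs(forward(x, ablate_unit=unit) - baseline).sum()
    print(f"ablating hidden unit {unit}: output shifted by {shift:.3f}")
```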

Until we make those basic science advances so that anyone using this technology can actually probe it, we’re never going to be able to know, as individuals, as researchers, as enforcement officers in the civil rights context, whether these systems are being designed maliciously or just negligently. So the next step is we’ve got to do that basic science, and then be a little hesitant about integrating this technology into all these major human decisions until we’re confident that we know why they make the decisions they do.

Question: The trouble comes when you have to move into the regulatory environment, because technology now outpaces our ability to regulate it. All you had to do was watch some of the (U.S.) committee hearings around Facebook with (founder Mark) Zuckerberg and the politicians who didn’t even understand the basics of social media, never mind the technology. How do you balance the regulatory environment, which was designed to keep these things in check, with the fact that most people who are creating those regulations don’t even understand the basics of the technology, never mind the more complicated parts of it, like a neural net?

Answer: A lot of regulators have pearl-clutching syndrome. If there’s a new technology introduced into the universe: “Oh my God, we must stop it.” On the one hand, the complexity of these technologies, I will argue, is sometimes a good thing. I used to joke that in my old job I was mostly in the regulatory forbearance business. There’s an old adage we have, mostly pertaining to bitcoin: if cash were invented today, the Treasury would never allow it. That’s pretty much true in the United States.

The ignorance results in an opportunity for some of the technology to, yes, get out ahead of regulators, but also to be used in ways that were previously uncontemplated, before someone puts a box around it. That box can really be very limiting. It will be interesting to watch, in the next five or 10 years, what happens in Europe vis-à-vis data advances now that we have the General Data Protection Regulation (GDPR), many of you are probably familiar with it, it’s a new super-strong privacy law.

The other side of it is that many, not all, legislators (I’ll speak just for the United States) are wildly ignorant about this stuff. And it’s a problem of their own making.

The U.S. Congress, a little-known fact, used to have an Office of Technology Assessment, which was in the business of doing what I used to do at the State Department and in the White House: explaining complex technology to people who had no tech background. And that was a very important thing, because it meant a Senator of a certain age, or from a state that has no development in the space, could get smart on the technology, to at least be fluent and know what’s going on.

Maybe they’re not ‘gramming with the grandkids but at least they’re still engaged. And then they cut its funding. And then they got rid of it. And now it doesn’t exist.

So now they’re at the mercy of these often young staffers who are trying their very best to explain it. But the staffer who does tech for a senator or congressperson probably also does energy and environment, and probably also does labour and workforce issues. They’re not going to be experts at this stuff. We have to start figuring out how to fix the institutions that get our members of Congress and our elected officials smarter on this stuff, and we have to take it upon ourselves as voters to stop accepting ignorance in this context. You would not have somebody walk into the Cabinet room at the White House any more and say, ‘I don’t understand this technology stuff.’ That used to be very fashionable back in the day.

Now, to walk into the Cabinet room and say ‘I don’t understand this technology stuff’ is tantamount to walking into the Cabinet room and saying ‘I don’t understand this economics stuff, so I’m sure you don’t need me to talk any economics.’ It just doesn’t happen.

We as a society have to stigmatize ignorance among our elected officials, because increasingly they’re going to have to opine on major questions. Take a basic one, like ‘how much enforcement budget do you give to the Department of Justice Civil Rights Division?’ It turns out the answer needs to be ‘10 times more than before,’ because you have to hire data scientists, and they’re a lot more expensive than cops.

That kind of understanding is something that we as voters have to demand from our elected officials, and that frankly is only going to happen at the ballot box because it’s not going to happen through a program of self-education.

Question: Social media has been infusing all this kind of stuff in the background. What we have now are people who are essentially shouting at each other, whether it be on Twitter or arguing in a Facebook feed. Maybe there are a few more niceties on Instagram. How do you recreate civil discourse, which used to be the norm? I think that’s one of the things media is struggling with. How do we get humans to be nicer to each other?

Answer: I think part of it is fixing our algorithms. And I hate to give a technical answer to a human question but it’s this: If you go on Facebook and you start to look at videos about the weather (“Where does weather come from?”), you’re about eight videos away from ‘the moon landing was faked’ and ‘vaccines cause autism.’ It pushes you to the extremity. That’s an example of an algorithm that has been optimized for engagement. ‘Yeah I want to watch how the moon landing was faked, that’s fascinating.’ (And totally ridiculous.)

It’ll keep you watching. The same thing has been true on our Facebook feeds for a long time. The same thing is increasingly true now on Twitter. That which is loudest is engaged with the most, and that which is engaged with the most is presented to you the most.
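
The loop he describes, in which whatever is engaged with the most is shown the most, and whatever is shown the most gets engaged with even more, can be sketched in a few lines. The numbers and item titles below are invented; the point is only that a proportional feedback rule lets the ‘loudest’ item steadily take over the feed.

```python
# Illustrative sketch of the feedback loop described above: exposure is
# allocated in proportion to past engagement, so whatever starts loudest
# ends up dominating the feed. Titles and numbers are invented.
engagement = {"measured take": 1.0, "hot take": 1.2, "outrage bait": 1.5}

for round_num in range(10):
    total = sum(engagement.values())
    # Each item's share of impressions tracks its share of past engagement,
    # and more impressions generate proportionally more engagement.
    engagement = {title: score * (1 + score / total)
                  for title, score in engagement.items()}

total = sum(engagement.values())
for title, score in sorted(engagement.items(), key=lambda kv: -kv[1]):
    print(f"{title}: {score / total:.0%} of the feed")
```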

As a result, that’s what you’re seeing. That’s the lens through which you see the world, and that is the tone that you will replicate in our social media. I think it actually falls back to the platform. I don’t think this is a regulatory issue. I think this is a case where, as you’ve already seen, Facebook is one of a few companies that has said: ‘We’re going to try to fix this, we’re going to try to change, at the expense of user engagement, at the expense of how long people keep their eyeballs on the platform and therefore at the expense of revenue, and we’re going to try to make it a more human experience.’

I think more and more there is a market opportunity for companies with platforms that try to create more of that civic discourse.

Is it going to be like an academic seminar? No. But will it keep this form of self-radicalization that we have now seen in the political context in the U.S. and abroad from happening? I think so. I think you take your average American, your average North American, your average voter, and I don’t think they want to be that self-radicalized and plunged into that crazy world of mutual recrimination all the time.

I think they want a chance to read other perspectives and to be human. And if they can’t get it from the news, then maybe they’ll get it from journalists who are tweeting.

And if not, we’ll see where we go from there.

Published by Sean Stanleigh

Managing editor of Globe Content Studio
