Congressman: 'Get Control' of 'Scary' Artificial Intelligence, Privacy Implications

David Hanson, founder and chief executive officer of Hanson Robotics, brought Sophia the robot to a thematic joint meeting of the UN General Assembly Second Committee and ECOSOC on sustainable development in the age of rapid technological change in New York on Oct. 12, 2017. (Albin Lohr-Jones/Sipa via AP Images)

WASHINGTON – Rep. Pete Olson (R-Texas), the co-chairman of the Congressional Artificial Intelligence Caucus, told PJM that Congress shouldn’t get in the way of advances in artificial intelligence but lawmakers must make sure that individual privacy rights are not violated in the process.

“Here we talked about morality, that’s an issue because this involves access to all sorts of information and these machines do things we can’t imagine. They can actually learn like human beings do and apply some type of reasoning, you know, it’s all a program but still we’ve got to get control of it because it’s pretty scary,” Olson said after a recent congressional briefing, “Machines that Learn: Can They Also Be Taught Human Values?”

Olson said there are “still a lot of things that have to get worked out” before self-driving cars hit the market, adding that it will be “hard” to reach the point where people can “let go of the wheel” and trust a self-driving car.

“They’re doing some tests out there. They’ve had some issues. Tesla had a little problem with one of their vehicles, but you know that’s going to happen. We’ll learn from those mistakes and, again, it’s the future. I mean, if they could have a car that could drive itself, it’s going to be much safer because they will all be talking to each other,” Olson said.

“You could text and drive, whatever, you could get into a car drunk and you don’t have to find a ride home. Military veterans who are crippled or senior citizens who cannot walk would have mobility. That’s awesome. That’s the future, but we have to just not rush it and take our time.”

Olson said he already has concerns with his pickup truck, which automatically sends notifications to his car dealer when it’s time for an oil change or when it senses low tire pressure. He added that self-driving cars pose even greater concerns related to hacking.

“The car told the dealership that my tire was low and they check my oil. It’s like the car is narcing on me. It’s a tattletale. My point is that’s good, but it’s information – if they have that information, what can they do? Maybe take control of the car’s steering? If they’ve got a car that can drive without people, that can be at least possible,” the congressman said. “We just have to make sure [consumers] are protected because once that car is talking there’s a way in and there’s a way out. The bad guys, they want to get in and they will find it. And that means we have to keep adapting, keep working, keep testing and probing to make sure we see areas of weakness and make sure they are identified.”

Olson told PJM that advances in artificial intelligence are “coming quickly” but “the speed is up to us.”

“I mean, we in Congress just have to – you know, the market will do what the market will do but make sure they are doing it safe. If we get involved, just don’t get in the way but make sure we’re doing something safely, that represents privacy and individual rights. There’s all sorts of scary things out there that they are talking about – the fact that it’s all data, and bad guys can grab data,” he added.

David Danks, head of the Carnegie Mellon University Department of Philosophy, said data has given Uber a competitive advantage in self-driving car development.

“They are ahead because their software is better. And why is the software better? Because they’ve got more data, and so the competitive advantage is the data. And I think what’s needed is for regulation to be set by people who understand these complexities, and now it’s not,” said Danks, a panelist at the briefing. “It is an unbelievably hard problem, and what we really need to be having is an open discussion, which is not happening right now.”

According to Danks, technology companies are making data-related decisions and regulators as well as judges are “kind of saying, well, this seems right to me” while the country is not having an “open debate” about the handling of the personal data that’s being collected. Danks said the European Union took a “really hard-line view on data” with the General Data Protection Regulation (GDPR).

“We in the U.S. sort of abdicated that, and so we’ve got to figure out what we are going to do,” he added. “We’ve got to do something.”