April 23rd 2013
ERIA QUINT: Hello and welcome to another dynamic episode of ShowDown! I’m your host, Eria Quint. On April 12th of this year, the mining and shipping consortium ArcCorp announced that they are going to field-test their first AI-piloted cargo vessel on the infamous Earth-Pinecone run. Public reaction to the news has been mixed at best. Some applaud this as a natural next step in flight mechanics, while others decry the company for depriving thousands of their jobs in the midst of tough economic times.
We have two very special guests joining us to hash out this debate. The first is the host of the very popular shipping broadcast, Clean Shot, Mr. Craig Burton.
CRAIG BURTON: Hi there.
ERIA QUINT: The second is Dr. Yusef Phan, Chief Programming Executive at BabbageCorp and one of the architects of the avionics system that was ultimately used as part of ArcCorp’s proprietary AI-flight system. Welcome, doctor.
DR. YUSEF PHAN: Glad to be here. Thank you for having me.
ERIA QUINT: So Craig, ever since ArcCorp’s announcement, you’ve been quite critical of the entire concept.
CRAIG BURTON: Guess that’s a polite way to put it.
ERIA QUINT: How would you put it?
CRAIG BURTON: I think it’s a travesty to the good men and women who risk their lives transporting goods.
DR. YUSEF PHAN: I would think this system would address precisely that. Save them from risking their lives. It’s dangerous out there.
CRAIG BURTON: Damn right it is. Point is, no damn system’s ever gonna be as good as an experienced pilot.
DR. YUSEF PHAN: Initially, maybe, but these are adaptive programs, capable of learning and growing as they continue to fly missions.
CRAIG BURTON: Tell me something, you think it’ll ever be able to tell when it’s about to be ambushed?
DR. YUSEF PHAN: I’m sure scanning a vessel and seeing its weapons and shields activate is a pretty good indication of malicious intent.
CRAIG BURTON: Yeah, and how will it know whether that malicious intent is directed at it or at some other threat?
DR. YUSEF PHAN: I don’t know. You’re presenting a hypothetical situation that you alone know the parameters of.
CRAIG BURTON: My daddy used to tell me, “Son, it’s easy to be smart, ain’t easy to be wise.” The point is, you’re asking a computer to make a judgment call. Now I don’t care how smart it gets, it’s still gonna be making decisions on logic, and I think we all know, people ain’t that logical.
DR. YUSEF PHAN: Here’s what I do know. ArcCorp reported over three hundred million Credits in losses last cycle due to human error and compensation payouts to the families of pilots lost during pirate and Vanduul attacks.
CRAIG BURTON: Jeez, you sure you ain’t really a member of ArcCorp’s marketing team?
DR. YUSEF PHAN: Leaving the money aside for the moment, I would think that saving the lives of your fellow shippers might be worth the inconvenience of looking for work.
CRAIG BURTON: Don’t try and twist it. My heart breaks every time I hear about some hauler getting blasted while on a run, but we all signed up for the job. Ain’t nobody ever spared us the risks. We took them in stride like we do everything else. Point is, your program is taking away the opportunity for us to provide for our families.
ERIA QUINT: I don’t think anyone disputes the possibility of saving pilots’ lives on transport routes through dangerous space, but let’s talk for a moment about the financial implications of this.
CRAIG BURTON: Corporate lifeblood, you mean.
ERIA QUINT: To offer his perspective on this matter, we’re joined by Economic Analyst Ryu Tarkovsky.
RYU TARKOVSKY: Hello.
ERIA QUINT: So let’s talk cost. You’ve offered a pretty comprehensive analysis of the possible cost risks and benefits of adopting a fleet of AI-controlled haulers.
CRAIG BURTON: There’s a horrible image.
RYU TARKOVSKY: Yes, Eria, that’s correct.
ERIA QUINT: What did you discover?
RYU TARKOVSKY: Humanity’s always had an aversion to the notion of AI. Particularly after the disastrous Artemis expedition —
DR. YUSEF PHAN: Technically, we don’t know if Janus was indeed responsible for that.
RYU TARKOVSKY: Regardless, the public seems to like it in theory, but actual application seems to be a different matter. Anthropologists theorize that there’s an innate part of our being that needs to be in control. Hence why flight computers are designed to assist the pilot, handling the complex mathematics necessary to fly, as opposed to deferring control to the system.
CRAIG BURTON: It’s also boring as hell. I mean, why wouldn’t you want to fly it yourself if you had the choice?
DR. YUSEF PHAN: Oh, I don’t know, maximizing fuel efficiency, operating with reflexes significantly faster than your own …
ERIA QUINT: Gentlemen, please. Continue, Mr. Tarkovsky.
RYU TARKOVSKY: So the first hurdle would be their customers embracing the concept of AI in common use. Financially speaking, it could be a worthy investment in the long run. The first vessel is apparently a modified Caterpillar, but ultimately, if they choose to spend the Credits, they could design a brand new type of ship that wouldn’t need life support functions at all, eliminating many systems and increasing the cargo capacity, thereby increasing the money generated per run.
Unfortunately, this initial phase will be their costliest. As Dr. Phan said, these are learning systems. Every systems engineer I’ve spoken to anticipates that a small percentage of ships will cause some very costly accidents, accidents that could have been easily avoided by a human pilot who could ‘read the circumstances,’ if you will.
The situation will effectively become a financial game of Dead Man’s Bluff: can ArcCorp weather the financial drain of accidents and lost cargo long enough for the AI to learn enough to stop making those mistakes?
DR. YUSEF PHAN: If I may interject, Mr. Tarkovsky, you’re assuming that the AIs aren’t going to be able to learn from each other. The programs will be constantly sending feeds of their activities to relay stations, which will be funneled into a central hub then beamed back out to the drone pilots. So all will learn from each other’s mistakes and successes.
CRAIG BURTON: Right, well, Comm traffic ain’t exactly reliable on a good day …
DR. YUSEF PHAN: ArcCorp has already announced that they plan to include human pilots to act as ‘monitors’ while the AI system is vetted and learning. So I think your concerns are a little unfounded, Mr. Burton.
CRAIG BURTON: Right, well, the first time a 200,000 kg tanker ship crashes headlong into an Orbital because it didn’t get the correct update that the docking arm was under construction, I’m sure you’ll be singin’ a different tune.
DR. YUSEF PHAN: If you’re so distrusting of computer-controlled craft, Mr. Burton, why do you let your own computer take control when navigating a jump point?
CRAIG BURTON: That’s different. That’s no more advanced than playing back a recording through the ship’s controls, and we’ve been doing that for more than a thousand years. So it ain’t an AI thing at all, really.
DR. YUSEF PHAN: It’s hypocritical is what it is.
ERIA QUINT: We’re going to take a quick break. When we get back, we’ll examine the legal ramifications of AI-piloted craft; in the past, pilots were held partially liable for damage, so now would it be solely the Corp, or would the AI programming also be held accountable? You won’t want to miss it. So reload your guns and get ready for another ShowDown!