Killer Robots could end the world as we know it
5 posters
Military’s killer robots must learn warrior code
I Robot
Automatons revolt to form a dictatorship over humans in Asimov's I, Robot
Leo Lewis
Autonomous military robots that will fight future wars must be programmed to live by a strict warrior code or the world risks untold atrocities at their steely hands.
The stark warning – which includes discussion of a Terminator-style scenario in which robots turn on their human masters – is issued in a hefty report funded by and prepared for the US Navy’s high-tech and secretive Office of Naval Research.
The report, the first serious work of its kind on military robot ethics, envisages a fast-approaching era where robots are smart enough to make battlefield decisions that are at present the preserve of humans. Eventually, it notes, robots could come to display significant cognitive advantages over Homo sapiens soldiers.
“There is a common misconception that robots will do only what we have programmed them to do,” Patrick Lin, the chief compiler of the report, said. “Unfortunately, such a belief is sorely outdated, harking back to a time when . . . programs could be written and understood by a single person.” The reality, Dr Lin said, was that modern programs included millions of lines of code and were written by teams of programmers, none of whom knew the entire program: accordingly, no individual could accurately predict how the various portions of large programs would interact without extensive testing in the field – an option that may either be unavailable or deliberately sidestepped by the designers of fighting robots.
The solution, he suggests, is to mix rules-based programming with a period of “learning” the rights and wrongs of warfare.
A rich variety of scenarios outlining the ethical, legal, social and political issues posed as robot technology improves is covered in the report. How do we protect our robot armies against terrorist hackers or software malfunction? Who is to blame if a robot goes berserk in a crowd of civilians – the robot, its programmer or the US president? Should the robots have a “suicide switch” and should they be programmed to preserve their lives?
The report, compiled by the Ethics and Emerging Technology department of California State Polytechnic University and obtained by The Times, strongly warns the US military against complacency or shortcuts as military robot designers engage in the “rush to market” and the pace of advances in artificial intelligence is increased.
Any sense of haste among designers may have been heightened by a US congressional mandate that by 2010 a third of all operational “deep-strike” aircraft must be unmanned, and that by 2015 one third of all ground combat vehicles must be unmanned.
“A rush to market increases the risk for inadequate design or programming. Worse, without a sustained and significant effort to build in ethical controls in autonomous systems . . . there is little hope that the early generations of such systems and robots will be adequate, making mistakes that may cost human lives,” the report noted.
A simple ethical code along the lines of the “Three Laws of Robotics” postulated by Isaac Asimov, the science fiction writer, will not be sufficient to ensure the ethical behaviour of autonomous military machines.
“We are going to need a code,” Dr Lin said. “These things are military, and they can’t be pacifists, so we have to think in terms of battlefield ethics. We are going to need a warrior code.”
Isaac Asimov’s three laws of robotics
1 A robot may not injure a human being or, through inaction, allow a human being to come to harm
2 A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law
3 A robot must protect its own existence as long as such protection does not conflict with the First or Second Law
Introduced in his 1942 short story Runaround
http://technology.timesonline.co.uk/tol/news/tech_and_web/article5741334.ece
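For what it's worth, the priority ordering in Asimov's three laws reads like a rule check a programmer could sketch directly. The snippet below is a toy illustration only – the flag names are hypothetical and are not taken from the Navy report:

```python
# Toy sketch of Asimov's three laws as a priority-ordered action filter.
# Each "action" is just a dict of hypothetical boolean flags.

def permitted(action):
    """Return True if the action passes the three laws, checked in priority order."""
    # First Law: never harm a human, by action or by inaction.
    if action.get("harms_human") or action.get("ignores_human_in_danger"):
        return False
    # Second Law: obey human orders, unless obeying would violate the First Law.
    if action.get("disobeys_order") and not action.get("order_would_harm_human"):
        return False
    # Third Law: preserve itself, unless that conflicts with Laws 1 or 2.
    if action.get("self_destructive") and not action.get("needed_to_obey_or_protect"):
        return False
    return True

# A harmful action is rejected; a harmless, obedient one passes.
print(permitted({"harms_human": True}))
print(permitted({}))
```

Note how the check order encodes the priority: the Second Law is only consulted after the First passes, which is exactly why the report argues such a simple code breaks down for machines that cannot be pacifists.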
Obama Rulz- Paratrooper
- Number of posts : 163
Registration date : 2009-02-20
Re: Killer Robots could end the world as we know it
This is one more area where bloodthirsty neocons are causing problems and another reason why we need to unite under the UN in a global government to end war.
Obama Rulz- Paratrooper
Re: Killer Robots could end the world as we know it
Obama Rulz wrote:This is one more area where bloodthirsty neocons are causing problems and another reason why we need to unite under the UN in a global government to end war.
one world government , pfft , I laugh at the bumbling of the U.N.
submarinepainter- Ranger Qualified
- Number of posts : 566
Age : 65
Locale : taxationland Maine
Registration date : 2007-08-19
Re: Killer Robots could end the world as we know it
submarinepainter wrote:Obama Rulz wrote:This is one more area where bloodthirsty neocons are causing problems and another reason why we need to unite under the UN in a global government to end war.
one world government , pfft , I laugh at the bumbling of the U.N.
So you're in favor of widespread war and violence? That's sick.
Obama Rulz- Paratrooper
Re: Killer Robots could end the world as we know it
Obama Rulz wrote:submarinepainter wrote:Obama Rulz wrote:This is one more area where bloodthirsty neocons are causing problems and another reason why we need to unite under the UN in a global government to end war.
one world government , pfft , I laugh at the bumbling of the U.N.
So you're in favor of widespread war and violence? That's sick.
When has the UN ever stopped a war?
All they do is disarm those trying to protect themselves, then ignore it when they are massacred because it's outside their 'mandate'
Re: Killer Robots could end the world as we know it
Obama Rulz wrote:submarinepainter wrote:Obama Rulz wrote:This is one more area where bloodthirsty neocons are causing problems and another reason why we need to unite under the UN in a global government to end war.
one world government , pfft , I laugh at the bumbling of the U.N.
So you're in favor of widespread war and violence? That's sick.
where did I say that?? by the way your Dictator is sending more troops to war, is he not??
submarinepainter- Ranger Qualified
Re: Killer Robots could end the world as we know it
Obama Rulz wrote:Military’s killer robots must learn warrior code
are they dems or GOP????
namvet- Ranger Qualified
- Number of posts : 399
Locale : Missouri
Registration date : 2007-08-20
Re: Killer Robots could end the world as we know it
BTW I have always been concerned about AI SkyNet type things and hope a human is always in the loop!
Re: Killer Robots could end the world as we know it
Obama Rulz wrote:
So you're in favor of widespread war and violence? That's sick.
Careful there my fine friend, half the guys on this board either have been or are on the front line keeping "war" from climbing up your ass.
Loner- Infantry
- Number of posts : 98
Locale : Clock Tower
Registration date : 2007-11-23
Re: Killer Robots could end the world as we know it
NonConformist wrote:BTW I have always been concerned about AI SkyNet type things and hope a human is always in the loop!
if the dems control it we're fucked !!!!!
namvet- Ranger Qualified
Re: Killer Robots could end the world as we know it
namvet wrote:NonConformist wrote:BTW I have always been concerned about AI SkyNet type things and hope a human is always in the loop!
if the dems control it we're fucked !!!!!
They'd give our missile defense controls to some immigrant Iranian who hates America but the dems let come here anyway, knowing it
Re: Killer Robots could end the world as we know it
NonConformist wrote:namvet wrote:NonConformist wrote:BTW I have always been concerned about AI SkyNet type things and hope a human is always in the loop!
if the dems control it we're fucked !!!!!
They'd give our missile defense controls to some immigrant Iranian who hates America but the dems let come here anyway, knowing it
you mean one that was freed from Gitmo???? hahaha..................
namvet- Ranger Qualified
Re: Killer Robots could end the world as we know it
namvet wrote:NonConformist wrote:namvet wrote:NonConformist wrote:BTW I have always been concerned about AI SkyNet type things and hope a human is always in the loop!
if the dems control it we're fucked !!!!!
They'd give our missile defense controls to some immigrant Iranian who hates America but the dems let come here anyway, knowing it
you mean one that was freed from Gitmo???? hahaha..................
Of course, that's even better and more 'fair' according to the Dems, after all why couldn't we trust them LOL
Re: Killer Robots could end the world as we know it
NonConformist wrote:namvet wrote:NonConformist wrote:namvet wrote:NonConformist wrote:BTW I have always been concerned about AI SkyNet type things and hope a human is always in the loop!
if the dems control it we're fucked !!!!!
They'd give our missile defense controls to some immigrant Iranian who hates America but the dems let come here anyway, knowing it
you mean one that was freed from Gitmo???? hahaha..................
Of course, that's even better and more 'fair' according to the Dems, after all why couldn't we trust them LOL
and now Osama's gonna close Gitmo down. but what to do with the killers. AHA !!! rehab em so they can vote in the next election.
namvet- Ranger Qualified
Re: Killer Robots could end the world as we know it
Obama Rulz wrote:This is one more area where bloodthirsty neocons are causing problems and another reason why we need to unite under the UN in a global government to end war.
There's an oxymoron for ya' right there. LMAO!
The same UN that got busted for rape camps? That wonderful organization? The one that sidesteps conflict until the genocide is complete?
http://www.worldnetdaily.com/news/article.asp?ARTICLE_ID=42088
But of course they did a thorough SELF investigation in all of two weeks! Holy crap, I wish I could investigate myself if I ever come up on charges. hahahaha.
I wont even bring up the Oil for Food scandal, wait, yes I will!!!
The same UN that has done nothing about diamond mine slavery, mass hatchet deaths and other atrocities around Africa is going to save us from ourselves. Give me a break. And whose money and troops do you suppose will be used if the UN ever grows a set of balls to resolve these issues?
Wow, I really want to live in a fantasy land, things sound so much easier and wonderful that way.
Loner- Infantry
Re: Killer Robots could end the world as we know it
NonConformist wrote:
Theyd give our missile defense controls to some immigrant Iranian who hates America but the dems let come here knowing them
Too late. The KKKlintons took care of selling missile defense secrets to China years ago.
Remember the contributions for Missiles scandal?
The Klintons sold out the American Workers and National Security for ILLEGAL FOREIGN CAMPAIGN CONTRIBUTIONS. Taking contributions from foreigners or a foreign government is illegal, it's the law.
But hey, that's OK, they're liberals, and liberals are not expected to have morals.
Loner- Infantry
Re: Killer Robots could end the world as we know it
libs are the true American terrorists. they're a threat to national security
namvet- Ranger Qualified