About

AssembleMe is an information science blog written by Julius Schorzman that frequently strays off-topic.

Julius is the CEO of the Google Ventures-backed company DailyCred. DailyCred makes working with OAuth super duper simple.

To view some of my old projects, visit Shopobot or CodeCodex.

You can follow me on Twitter @schorzman if you really want to.

    Tuesday, July 20, 2004

    I, Robot III

    INFO SCIENCE: More A.I. musings by way of I, Robot impetus. (Worst sentence ever? Perhaps, but I'm going to stick with it.) Both via Ray Kurzweil's site.



    The first: Asimov's Three Laws of Robotics unsafe?



    "AI could improve unexpectedly fast once it is created," warns Eliezer Yudkowsky, Director of the Singularity Institute for Artificial Intelligence. "Computer chips already run at ten million times the serial speed of human neurons and are still getting faster… An AI can absorb hundreds or thousands of times as much computing power, where humans are limited to what they're born with. [And] an AI is a recursively self-improving pattern.



    "Just as evolution creates order and structure enormously faster than accidental emergence, we may find that recursive self-improvement creates order enormously faster than evolution. If so, we may have only one chance to get this right."



    Asimov's laws are not sufficient, said Michael Anissimov, writing in an article on the 3 Laws Unsafe site. "It's not so straightforward to convert a set of statements into a mind that follows or believes in those statements. Two, semantic ambiguity means that without personally understanding the reasons for the laws and the original intent, a robot might misinterpret their meaning, leading to problems. Third, Asimov's Laws ignore the possibility that a robot will acquire the ability to reprogram itself -- an inevitable eventuality if intelligent robots are created."



    The second: Robots (Probably) Won't Turn Against Humanity, Experts Say in Their Defense. (Could I love that title any more? I'd have to say no.)



    "The message is that they are dangerous and they will potentially have the ability to harm biological humans," said a New York University professor of computer science, Demetri Terzopoulos.

    While Sony's robot dog, AIBO, has yet to cause harm to anyone, software developers like those at aibopet.com are selling downloadable programs that change AIBO's personality, help him make different sounds, and even imitate movie characters like the villainous robots from Battlestar Galactica. If little robotic dogs can be hacked, some wonder whether human-sized robots can truly escape interference.

